Search Results

Search found 16783 results on 672 pages for 'static typing'.


  • Facebook Connect login button not rendering

    - by tloflin
    I'm trying to implement a Facebook Connect Single Sign-on site. I originally just had a Connect button (<fb:login-button>), which the user had to click every time they wanted to sign in. I now have the auto login and logout features working. That is, my site will detect a logged-in Facebook account and automatically authenticate them if it can find a match to one of my site's user accounts, and automatically deauthenticate if the Facebook session is lost. I also have a manual logout button that will log the user out of both my site and Facebook. All of those are working correctly, but now my original Connect button is intermittently not being rendered correctly. It just shows up as the plain XHTML (ie., it looks like plain text--not a button--and is unclickable), and no XFBML is applied. Here is the basic code: On every page: <body> {...} <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <script type="text/javascript> FB.init('APIKey', '/xd_receiver.htm'); var isAuth; // isAuth determines if the user is authenticated on my site // it should be true on the logout page, false on the login page FB.ensureInit(function(){ var session = FB.Facebook.apiClient.get_session(); if (session && !isAuth) { PageMethods.FacebookLogin(session.uid, session.session_key, FBLogin, FBLoginFail); // This is an AJAX call that authenticates the user on my site. } else if(!session && isFBAuth) { PageMethods.FacebookLogout(FBLogout, FBLogoutFail); // This is an AJAX call that deauthenticates the user on my site. } // the callback functions do nothing at the moment }); </script> {...} </body> On the login page: (this page is not visible to logged in users) <body> {...} <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <script type="text/javascript> FB.init('APIKey', '/xd_receiver.htm'); {...} // auto-auth code as on every page </script> <!-- This is the button that fails to render --> <fb:login-button v="2" onlogin="UserSignedIntoFB();" size="large" autologoutlink="true"><fb:intl>Login With Your Facebook Account</fb:intl></fb:login-button> <script type="text/javascript"> function UserSignedIntoFB() { {...} // posts back to the server which authenticates the user on my site & redirects } </script> {...} </body> On the logout page: (this page is not visible to logged out users) <body> {...} <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <script type="text/javascript> FB.init('APIKey', '/xd_receiver.htm'); {...} // auto-auth code as on every page </script> <script type="text/javascript"> function FBLoggedOut() { {...} // posts back to the server which deauthenticates the user on my site & redirects to login } function Logout() { if (FB.Facebook.apiClient.get_session()) { FB.Connect.logout(FBLoggedOut); // logs out of Facebook if it's a Facebook account return false; } else { return true; // posts back if it's a non-Facebook account & redirects to login } } </script> <a onclick="return Logout();" href="postback_url">Sign Out</a> {...} </body> Some things I've already looked at: The automatic login and logout are working great. I can log in and out of Facebook on another tab and my site will notice the session changes and update accordingly. The logout seems to be working fine: when clicked, it deauthenticates the user and logs them out of Facebook, as intended. 
The issue usually seems to happen after a Facebook user is logged out, but it happens somewhat intermittently; it might happen before they ever log in, and it goes away after a few minutes/refreshes. Some cookies are left over after the login/logout process, but deleting them does not fix the issue. Restarting the browser does not fix the issue. The user is definitely logged out of Facebook and my site when the problem occurs. I've checked for Facebook sessions and site authentication. All external script calls are being served up correctly. I have a suspicion that there's something else I need to be doing upon logout (like clearing the session or cookies), but according to everything I've read about Facebook Connect, all I need to do is call the logout function (and deauthenticate on my server side). I'm really at a loss; does anybody have any idea what could be wrong?

    Read the article

  • Why won't xattr PECL extension build on 12.10?

    - by Dan Jones
    I was using the xattr pecl extension in 12.04 (in fact, I think since 10.04) without problem. Not surprisingly, I had to reinstall it after upgrading to 12.10 because of the new version of PHP. But now it fails to build, and I can't figure out why. Other PECL extensions have built fine. And I have libattr1 and libattr1-dev installed. Here's the output from the build: downloading xattr-1.1.0.tgz ... Starting to download xattr-1.1.0.tgz (5,204 bytes) .....done: 5,204 bytes 3 source files, building running: phpize Configuring for: PHP Api Version: 20100412 Zend Module Api No: 20100525 Zend Extension Api No: 220100525 libattr library installation dir? [autodetect] : building in /tmp/pear/temp/pear-build-rootdSMx0G/xattr-1.1.0 running: /tmp/pear/temp/xattr/configure --with-xattr checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib checking for PHP extension directory... /usr/lib/php5/20100525 checking for PHP installed headers prefix... /usr/include/php5 checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... 0.13.5 (ok) checking for gawk... gawk checking for xattr support... yes, shared checking for xattr files in default path... found in /usr checking for attr_get in -lattr... yes checking how to print strings... printf checking for a sed that does not truncate output... (cached) /bin/sed checking for fgrep... /bin/grep -F checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... 
ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking for gawk... (cached) gawk checking command to parse /usr/bin/nm -B output from cc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... no checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC -DPIC checking if cc PIC flag -fPIC -DPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking if cc supports -c -o file.o... (cached) yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating ./config.status config.status: creating config.h config.status: executing libtool commands running: make /bin/bash /tmp/pear/temp/pear-build-rootdSMx0G/xattr-1.1.0/libtool --mode=compile cc -I. -I/tmp/pear/temp/xattr -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootdSMx0G/xattr-1.1.0/include -I/tmp/pear/temp/pear-build-rootdSMx0G/xattr-1.1.0/main -I/tmp/pear/temp/xattr -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/xattr/xattr.c -o xattr.lo libtool: compile: cc -I. 
-I/tmp/pear/temp/xattr -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootdSMx0G/xattr-1.1.0/include -I/tmp/pear/temp/pear-build-rootdSMx0G/xattr-1.1.0/main -I/tmp/pear/temp/xattr -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/xattr/xattr.c -fPIC -DPIC -o .libs/xattr.o /tmp/pear/temp/xattr/xattr.c:50:1: error: unknown type name 'function_entry' /tmp/pear/temp/xattr/xattr.c:51:2: warning: braces around scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: (near initialization for 'xattr_functions[0]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: initialization makes integer from pointer without a cast [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: (near initialization for 'xattr_functions[0]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: error: initializer element is not computable at load time /tmp/pear/temp/xattr/xattr.c:51:2: error: (near initialization for 'xattr_functions[0]') /tmp/pear/temp/xattr/xattr.c:51:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: (near initialization for 'xattr_functions[0]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: (near initialization for 'xattr_functions[0]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: (near initialization for 'xattr_functions[0]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:51:2: warning: (near initialization for 'xattr_functions[0]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: braces around scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: (near initialization for 'xattr_functions[1]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: initialization makes integer from pointer without a cast [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: (near initialization for 'xattr_functions[1]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: error: initializer element is not computable at load time /tmp/pear/temp/xattr/xattr.c:52:2: error: (near initialization for 'xattr_functions[1]') /tmp/pear/temp/xattr/xattr.c:52:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: (near initialization for 'xattr_functions[1]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: (near initialization for 'xattr_functions[1]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: (near initialization for 'xattr_functions[1]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:52:2: warning: (near initialization for 'xattr_functions[1]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: braces around scalar initializer [enabled by 
default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: (near initialization for 'xattr_functions[2]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: initialization makes integer from pointer without a cast [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: (near initialization for 'xattr_functions[2]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: error: initializer element is not computable at load time /tmp/pear/temp/xattr/xattr.c:53:2: error: (near initialization for 'xattr_functions[2]') /tmp/pear/temp/xattr/xattr.c:53:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: (near initialization for 'xattr_functions[2]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: (near initialization for 'xattr_functions[2]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: (near initialization for 'xattr_functions[2]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:53:2: warning: (near initialization for 'xattr_functions[2]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: braces around scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: (near initialization for 'xattr_functions[3]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: initialization makes integer from pointer without a cast [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: (near initialization for 'xattr_functions[3]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: error: initializer element is not computable at load time /tmp/pear/temp/xattr/xattr.c:54:2: error: (near initialization for 'xattr_functions[3]') /tmp/pear/temp/xattr/xattr.c:54:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: (near initialization for 'xattr_functions[3]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: (near initialization for 'xattr_functions[3]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: (near initialization for 'xattr_functions[3]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:54:2: warning: (near initialization for 'xattr_functions[3]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: braces around scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: (near initialization for 'xattr_functions[4]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: initialization makes integer from pointer without a cast [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: (near initialization for 'xattr_functions[4]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: error: initializer element is not computable at load time /tmp/pear/temp/xattr/xattr.c:55:2: error: (near initialization for 'xattr_functions[4]') /tmp/pear/temp/xattr/xattr.c:55:2: warning: 
excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: (near initialization for 'xattr_functions[4]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: (near initialization for 'xattr_functions[4]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: (near initialization for 'xattr_functions[4]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:55:2: warning: (near initialization for 'xattr_functions[4]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: braces around scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: (near initialization for 'xattr_functions[5]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: initialization makes integer from pointer without a cast [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: (near initialization for 'xattr_functions[5]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: (near initialization for 'xattr_functions[5]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: excess elements in scalar initializer [enabled by default] /tmp/pear/temp/xattr/xattr.c:56:2: warning: (near initialization for 'xattr_functions[5]') [enabled by default] /tmp/pear/temp/xattr/xattr.c:67:2: warning: initialization from incompatible pointer type [enabled by default] /tmp/pear/temp/xattr/xattr.c:67:2: warning: (near initialization for 'xattr_module_entry.functions') [enabled by default] /tmp/pear/temp/xattr/xattr.c: In function 'zif_xattr_set': /tmp/pear/temp/xattr/xattr.c:122:49: error: 'struct _php_core_globals' has no member named 'safe_mode' /tmp/pear/temp/xattr/xattr.c:122:92: error: 'CHECKUID_DISALLOW_FILE_NOT_EXISTS' undeclared (first use in this function) /tmp/pear/temp/xattr/xattr.c:122:92: note: each undeclared identifier is reported only once for each function it appears in /tmp/pear/temp/xattr/xattr.c: In function 'zif_xattr_get': /tmp/pear/temp/xattr/xattr.c:171:49: error: 'struct _php_core_globals' has no member named 'safe_mode' /tmp/pear/temp/xattr/xattr.c:171:92: error: 'CHECKUID_DISALLOW_FILE_NOT_EXISTS' undeclared (first use in this function) /tmp/pear/temp/xattr/xattr.c:187:2: warning: passing argument 4 of 'attr_get' from incompatible pointer type [enabled by default] In file included from /tmp/pear/temp/xattr/xattr.c:37:0: /usr/include/attr/attributes.h:122:12: note: expected 'int *' but argument is of type 'size_t *' /tmp/pear/temp/xattr/xattr.c:198:3: warning: passing argument 4 of 'attr_get' from incompatible pointer type [enabled by default] In file included from /tmp/pear/temp/xattr/xattr.c:37:0: /usr/include/attr/attributes.h:122:12: note: expected 'int *' but argument is of type 'size_t *' /tmp/pear/temp/xattr/xattr.c: In function 'zif_xattr_supported': /tmp/pear/temp/xattr/xattr.c:243:49: error: 'struct _php_core_globals' has no member named 'safe_mode' /tmp/pear/temp/xattr/xattr.c:243:92: error: 'CHECKUID_DISALLOW_FILE_NOT_EXISTS' undeclared (first use in this function) /tmp/pear/temp/xattr/xattr.c: In function 'zif_xattr_remove': 
/tmp/pear/temp/xattr/xattr.c:288:49: error: 'struct _php_core_globals' has no member named 'safe_mode' /tmp/pear/temp/xattr/xattr.c:288:92: error: 'CHECKUID_DISALLOW_FILE_NOT_EXISTS' undeclared (first use in this function) /tmp/pear/temp/xattr/xattr.c: In function 'zif_xattr_list': /tmp/pear/temp/xattr/xattr.c:337:49: error: 'struct _php_core_globals' has no member named 'safe_mode' /tmp/pear/temp/xattr/xattr.c:337:92: error: 'CHECKUID_DISALLOW_FILE_NOT_EXISTS' undeclared (first use in this function) make: *** [xattr.lo] Error 1 ERROR: `make' failed There seem to be a few errors, but I can't make heads or tails of them. Does this just not work properly in 12.10? That would be a big problem for me.

    Read the article

  • Elusive race condition in Java

    - by nasufara
    I am creating a graphing calculator. In an attempt to squeeze some more performance out of it, I added some multithreading to the line calculator. Essentially what my current implementation does is construct a thread-safe Queue of X values, then start however many threads it needs, each one calculating a point on the line using the queue to get its values, and then ordering the points using a HashMap when the calculations are done. This implementation works great, and that's not where my race condition is (merely some background info). In examining the performance results from this, I found that the HashMap is a performance bottleneck, since I do that synchronously on one thread. So I figured that ordering each point as it's calculated would work best. I tried a PriorityQueue, but that was slower than the HashMap. I ended up creating an algorithm that essentially works like this: I construct a list of X values to calculate, like in my current algorithm. I then copy that list of values into another class, unimaginatively and temporarily named BlockingList, which is responsible for ordering the points as they are calculated. BlockingList contains a put() method, which takes in two BigDecimals as parameters, the first the X value, the second the calculated Y value. put() will only accept a value if the X value is the next one to be accepted in the list of X values, and will block until another thread gives it the next expected value. For example, since that can be confusing, say I have two threads, Thread-1 and Thread-2. Thread-2 gets the X value 10.0 from the values queue, and Thread-1 gets 9.0. However, Thread-1 completes its calculations first, and calls put() before Thread-2 does. Because BlockingList is expecting to get 10.0 first, and not 9.0, it will block on Thread-1 until Thread-2 finishes and calls put(). Once Thread-2 gives BlockingList 10.0, it notify()s all waiting threads, and expects 9.0 next. This continues until BlockingList gets all of its expected values. (I apologise if that was hard to follow; if you need more clarification, just ask.) As the question title suggests, there is a race condition in here. If I run it without any System.out.printlns, it will sometimes lock up because of conflicting wait() and notifyAll()s, but if I put a println in, it will run great.
A small implementation of this is included below, and exhibits the same behavior: import java.math.BigDecimal; import java.util.concurrent.ConcurrentLinkedQueue; public class Example { public static void main(String[] args) throws InterruptedException { // Various scaling values, determined based on the graph size // in the real implementation BigDecimal xMax = new BigDecimal(10); BigDecimal xStep = new BigDecimal(0.05); // Construct the values list, from -10 to 10 final ConcurrentLinkedQueue<BigDecimal> values = new ConcurrentLinkedQueue<BigDecimal>(); for (BigDecimal i = new BigDecimal(-10); i.compareTo(xMax) <= 0; i = i.add(xStep)) { values.add(i); } // Contains the calculated values final BlockingList list = new BlockingList(values); for (int i = 0; i < 4; i++) { new Thread() { public void run() { BigDecimal x; // Keep looping until there are no more values while ((x = values.poll()) != null) { PointPair pair = new PointPair(); pair.realX = x; try { list.put(pair); } catch (Exception ex) { ex.printStackTrace(); } } } }.start(); } } private static class PointPair { public BigDecimal realX; } private static class BlockingList { private final ConcurrentLinkedQueue<BigDecimal> _values; private final ConcurrentLinkedQueue<PointPair> _list = new ConcurrentLinkedQueue<PointPair>(); public BlockingList(ConcurrentLinkedQueue<BigDecimal> expectedValues) throws InterruptedException { // Copy the values into a new queue BigDecimal[] arr = expectedValues.toArray(new BigDecimal[0]); _values = new ConcurrentLinkedQueue<BigDecimal>(); for (BigDecimal dec : arr) { _values.add(dec); } } public void put(PointPair item) throws InterruptedException { while (item.realX.compareTo(_values.peek()) != 0) { synchronized (this) { // Block until someone enters the next desired value wait(); } } _list.add(item); _values.poll(); synchronized (this) { notifyAll(); } } } } My question is can anybody help me find the threading error? Thanks!
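
    For reference, below is a minimal, self-contained sketch of the ordering protocol described above, with the condition check and the wait()/notifyAll() kept inside one synchronized region. In the posted put(), the compareTo() check runs outside the lock, so a notifyAll() can slip in between a failed check and the wait() that follows it; that is a plausible (though unverified) source of the intermittent hang. The class and method names here are simplified stand-ins, not the poster's actual classes.

    import java.math.BigDecimal;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class OrderedSink {
        private final ConcurrentLinkedQueue<BigDecimal> expected;
        private final ConcurrentLinkedQueue<BigDecimal> ordered =
                new ConcurrentLinkedQueue<BigDecimal>();

        OrderedSink(ConcurrentLinkedQueue<BigDecimal> expectedValues) {
            // Copy the expected ordering, as BlockingList's constructor does.
            expected = new ConcurrentLinkedQueue<BigDecimal>(expectedValues);
        }

        // The check, the wait(), the queue updates and the notifyAll() all run under
        // the same monitor, so no notification can be missed between a failed check
        // and the subsequent wait().
        synchronized void put(BigDecimal x) throws InterruptedException {
            while (expected.peek() == null || x.compareTo(expected.peek()) != 0) {
                wait(); // re-check the condition under the lock after every wake-up
            }
            ordered.add(x);   // x was the next expected value
            expected.poll();  // advance to the next expected value
            notifyAll();      // wake whichever thread holds that value
        }
    }

    public class OrderedSinkDemo {
        public static void main(String[] args) throws InterruptedException {
            final ConcurrentLinkedQueue<BigDecimal> values =
                    new ConcurrentLinkedQueue<BigDecimal>();
            for (int i = 0; i < 100; i++) {
                values.add(BigDecimal.valueOf(i));
            }
            final OrderedSink sink = new OrderedSink(values);
            Thread[] workers = new Thread[4];
            for (int t = 0; t < workers.length; t++) {
                workers[t] = new Thread() {
                    public void run() {
                        BigDecimal x;
                        while ((x = values.poll()) != null) {
                            try {
                                sink.put(x); // blocks until x is the next expected value
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                                return;
                            }
                        }
                    }
                };
                workers[t].start();
            }
            for (Thread w : workers) {
                w.join();
            }
            System.out.println("All values accepted in order.");
        }
    }

    If strict hand-off ordering is only needed for the final output, another option is to have each worker write its PointPair into a pre-sized array slot indexed by the value's position, which removes the blocking entirely.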

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object. The XML Data An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes; namely ProductId, ProductName, IntroductionDate and Price. <Products>  <Product ProductId="1"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="07/01/2010"  Price="799" />  <Product ProductId="2"           ProductName="ASP.Net Jumpstart Samples"           IntroductionDate="05/24/2005"  Price="0" />  ...  ...</Products> The Product Class Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class. public class Product : CommonBase{  public const string XmlFile = @"Xml/Product.xml";   private string _ProductName;  private int _ProductId;  private DateTime _IntroductionDate;  private decimal _Price;   public string ProductName  {    get { return _ProductName; }    set {      if (_ProductName != value) {        _ProductName = value;        RaisePropertyChanged("ProductName");      }    }  }   public int ProductId  {    get { return _ProductId; }    set {      if (_ProductId != value) {        _ProductId = value;        RaisePropertyChanged("ProductId");      }    }  }   public DateTime IntroductionDate  {    get { return _IntroductionDate; }    set {      if (_IntroductionDate != value) {        _IntroductionDate = value;        RaisePropertyChanged("IntroductionDate");      }    }  }   public decimal Price  {    get { return _Price; }    set {      if (_Price != value) {        _Price = value;        RaisePropertyChanged("Price");      }    }  }} NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post. Reading Data When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the “Product” Descendants in the XML file. The “select” portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the “Value” property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class. 
private void LoadProducts(){  XElement xElem = null;   try  {    xElem = XElement.Load(Product.XmlFile);     // The following will NOT work if you have missing attributes    var products =         from elem in xElem.Descendants("Product")        orderby elem.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(            elem.Attribute("ProductId").Value),          ProductName = Convert.ToString(            elem.Attribute("ProductName").Value),          IntroductionDate = Convert.ToDateTime(            elem.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(elem.Attribute("Price").Value)        };     lstData.DataContext = products;  }  catch (Exception ex)  {    MessageBox.Show(ex.Message);  }} This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy. Using Extension Methods Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file. private void LoadProducts(){  var xElem = XElement.Load(Product.XmlFile);   var products =       from elem in xElem.Descendants("Product")      orderby elem.Attribute("ProductName").Value      select new Product      {        ProductId = elem.Attribute("ProductId").GetAsInteger(),        ProductName = elem.Attribute("ProductName").GetAsString(),        IntroductionDate =            elem.Attribute("IntroductionDate").GetAsDateTime(),        Price = elem.Attribute("Price").GetAsDecimal()      };   lstData.DataContext = products;} Writing Extension Methods To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods such as GetAsString and GetAsInteger. These methods are just like any other method you would write except when you pass in the parameter you prefix the type with the keyword “this”. This lets the compiler know that it should add this method to the class specified in the parameter. public static class XmlExtensionMethods{  public static string GetAsString(this XAttribute attr)  {    string ret = string.Empty;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      ret = attr.Value;    }     return ret;  }   public static int GetAsInteger(this XAttribute attr)  {    int ret = 0;    int value = 0;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      if(int.TryParse(attr.Value, out value))        ret = value;    }     return ret;  }   ...  ...} Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned such as an empty string or a 0 for a numeric value. Summary Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. 
    You will probably want to create more extension methods to handle XElement objects as well, for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value to return if a null value is detected, or any other parameters you wish. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose “Tips & Tricks”, then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down. Good Luck with your Coding, Paul D. Sheriff
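
    For readers working outside .NET, the null-guarding pattern above translates almost one-to-one. The sketch below is a rough Java analogue built on org.w3c.dom; it is purely illustrative and is not part of the article's sample code.

    import org.w3c.dom.Element;

    public final class XmlAttr {
        private XmlAttr() {}

        // Returns the attribute value, or the supplied default when it is missing or empty.
        // (Element.getAttribute() returns "" rather than null for absent attributes.)
        public static String getAsString(Element elem, String name, String defaultValue) {
            String value = elem.getAttribute(name);
            return (value == null || value.length() == 0) ? defaultValue : value;
        }

        // Returns the attribute parsed as an int, or the default when missing or malformed.
        public static int getAsInteger(Element elem, String name, int defaultValue) {
            String value = elem.getAttribute(name);
            if (value == null || value.length() == 0) {
                return defaultValue;
            }
            try {
                return Integer.parseInt(value.trim());
            } catch (NumberFormatException e) {
                return defaultValue;
            }
        }
    }

    As in the C# version, the point is that a missing or malformed attribute degrades to a sensible default instead of throwing while the object is being loaded.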

    Read the article

  • BlackBerry/J2ME - SAX parse collection of objects with attributes

    - by Changqi Guo
    I have a problem with using the SAX parser to parse a XML file. It is a complex XML file, it is like the following. <Objects> <Object no="1"> <field name="PID">ilives:87877</field> <field name="dc.coverage">Charlottetown</field> <field name="fgs.ownerId">fedoraAdmin</field> </Object> <Object no="2">...... I am confused how to get the names in each field, and how to store the information of each object. import java.util.Enumeration; import java.util.Hashtable; public class XMLObject { private Hashtable mFields = new Hashtable(); private int mN = -1; public int getN() { return mN; } public void setN(int n) { mN = n; } public String getStringField(String key) { return (String) mFields.get(key); } public void setStringField(String key, String value) { mFields.put(key, value); } public String getPID() { return getStringField("PID"); } public void setPID(String pid) { setStringField("PID", pid); } public String getDcCoverage() { return getStringField("dc.coverage"); } public void setDcCoverage(String dcCoverage) { setStringField("dc.coverage", dcCoverage); } public String getFgsOwnerId() { return getStringField("fgs.ownerId"); } public void setFgsOwnerId(String fgsOwnerId) { setStringField("fgs.ownerId", fgsOwnerId); } public String dccreator() { return getStringField("dc.creator"); } public void dccreator(String dccreator) { setStringField("dc.creator", dccreator); } public String getdcformat() { return getStringField("dc.format"); } public void setdcformat(String dcformat) { setStringField("dc.format", dcformat); } public String getdcidentifier() { return getStringField("dc.identifier"); } public void setdcidentifier(String dcidentifier) { setStringField("dc.identifier", dcidentifier); } public String getdclanguage() { return getStringField("dc.language"); } public void setdclanguage(String dclanguage) { setStringField("dc.language", dclanguage); } public String getdcpublisher() { return getStringField("dc.publisher"); } public void setdcpublisher(String dcpublisher) { setStringField("dc.publisher",dcpublisher); } public String getdcsubject() { return getStringField("dc.subject"); } public void setdcsubject(String dcsubject) { setStringField("dc.subject",dcsubject); } public String getdctitle() { return getStringField("dc.title"); } public void setdctitle(String dctitle) { setStringField("dc.title",dctitle); } public String getdctype() { return getStringField("dc.type"); } public void setdctype(String dctype) { setStringField("dc.type",dctype); } public String toString() { StringBuffer sb = new StringBuffer(); sb.append("N:"+mN+";"); Enumeration keys = mFields.keys(); while (keys.hasMoreElements()) { String key = (String) keys.nextElement(); sb.append(key+":"+mFields.get(key)+";"); } return sb.toString(); } } i used the same handler class you provided import java.io.*; import net.rim.device.api.system.Bitmap; import javax.xml.parsers.ParserConfigurationException; import javax.xml.parsers.SAXParser; import javax.xml.parsers.SAXParserFactory; import java.io.InputStream; import net.rim.device.api.ui.component.*; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.xml.parsers.*; import org.w3c.dom.*; import org.xml.sax.*; import org.xml.sax.helpers.DefaultHandler; public class xmlparsermainscreen extends MainScreen{ private static String xmlres = "/xml/xml1.xml"; private RichTextField textOutputField; public xmlparsermainscreen() throws ParserConfigurationException, net.rim.device.api.xml.parsers.ParserConfigurationException, IOException { InputStream inputStream = 
getClass().getResourceAsStream(xmlres); ByteArrayOutputStream baos = new ByteArrayOutputStream(); byte[] buffer = new byte[10000]; int bytesRead = inputStream.read(buffer); while (bytesRead > 0) { baos.write(buffer, 0, bytesRead); bytesRead = inputStream.read(buffer); } baos.close(); String result=baos.toString(); ByteArrayInputStream bais = new ByteArrayInputStream(result.getBytes()); XMLObject[] xmlObjects = getXMLObjects(bais); for (int i = 0; i < xmlObjects.length; i++) { XMLObject o = xmlObjects[i]; textOutputField = new RichTextField(); add(textOutputField); textOutputField.setText(o.toString()); // add(new LabelField(o.toString())); } LabelField resultdis=new LabelField("resultdisplay"); add(resultdis); //textOutputField = new RichTextField(); //add(textOutputField); //textOutputField.setText(result); } static XMLObject[] getXMLObjects(InputStream is) throws ParserConfigurationException { XMLObjectHandler xmlObjectHandler = new XMLObjectHandler(); try { SAXParser parser = SAXParserFactory.newInstance() .newSAXParser(); parser.parse(is, xmlObjectHandler); } catch (ParserConfigurationException e) { e.printStackTrace(); } catch (SAXException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } return xmlObjectHandler.getXMLObjects(); } } import java.io.IOException; import javax.xml.parsers.ParserConfigurationException; import net.rim.device.api.ui.UiApplication; public class xmlparser extends UiApplication { private xmlparser() throws ParserConfigurationException, net.rim.device.api.xml.parsers.ParserConfigurationException, IOException { pushScreen( new xmlparsermainscreen() ); } public static void main( String[] args ) throws ParserConfigurationException, net.rim.device.api.xml.parsers.ParserConfigurationException, IOException { new xmlparser().enterEventDispatcher(); } }
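
    The XMLObjectHandler class referenced above is not shown in the question. The sketch below is one plausible implementation for the <Objects> / <Object no="..."> / <field name="...">value</field> structure at the top of the post, written against the standard org.xml.sax API and the XMLObject class already listed; the handler itself is an assumption, not the asker's original code.

    import java.util.Vector;
    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.DefaultHandler;

    public class XMLObjectHandler extends DefaultHandler {
        private final Vector objects = new Vector(); // J2ME/BlackBerry-friendly collection
        private XMLObject current;                   // the <Object> currently being filled
        private String fieldName;                    // value of the name="" attribute
        private StringBuffer text;                   // accumulates the field's character data

        public void startElement(String uri, String local, String qName, Attributes attrs)
                throws SAXException {
            if ("Object".equals(qName)) {
                current = new XMLObject();
                current.setN(Integer.parseInt(attrs.getValue("no")));
            } else if ("field".equals(qName)) {
                fieldName = attrs.getValue("name"); // e.g. "PID", "dc.coverage"
                text = new StringBuffer();
            }
        }

        public void characters(char[] ch, int start, int length) {
            if (text != null) {
                text.append(ch, start, length);
            }
        }

        public void endElement(String uri, String local, String qName) {
            if ("field".equals(qName) && current != null) {
                current.setStringField(fieldName, text.toString().trim());
                text = null;
            } else if ("Object".equals(qName)) {
                objects.addElement(current);
                current = null;
            }
        }

        public XMLObject[] getXMLObjects() {
            XMLObject[] arr = new XMLObject[objects.size()];
            objects.copyInto(arr);
            return arr;
        }
    }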

    Read the article

  • Issue with WIC image resizing on ASP.NET MVC 2

    - by Dave
    I am attempting to implement image resizing on user uploads in ASP.NET MVC 2 using a version of the method found: here on asp.net. This works great on my dev machine, but as soon as I put it on my production machine, I start getting the error 'Exception from HRESULT: 0x88982F60' which is supposed to mean that there is an issue decoding the image. However, when I use WICExplorer to open the image, it looks ok. I've also tried this with dozens of images of various sources and still get the error (though possible, I doubt all of them are corrupted). Here is the relevant code (with my debugging statements in there): MVC Controller [Authorize, HttpPost] public ActionResult Upload(string file) { //Check file extension string fx = file.Substring(file.LastIndexOf('.')).ToLowerInvariant(); string key; if (ConfigurationManager.AppSettings["ImageExtensions"].Contains(fx)) { key = Guid.NewGuid().ToString() + fx; } else { return Json("extension not found"); } //Check file size if (Request.ContentLength <= Convert.ToInt32(ConfigurationManager.AppSettings["MinImageSize"]) || Request.ContentLength >= Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"])) { return Json("content length out of bounds: " + Request.ContentLength); } ImageResizerResult irr, irr2; //Check if this image is coming from FF, Chrome or Safari (XHR) HttpPostedFileBase hpf = null; if (Request.Files.Count <= 0) { //Scale and encode image and thumbnail irr = ImageResizer.CreateMaxSizeImage(Request.InputStream); irr2 = ImageResizer.CreateThumbnail(Request.InputStream); } //Or IE else { hpf = Request.Files[0] as HttpPostedFileBase; if (hpf.ContentLength == 0) return Json("hpf.length = 0"); //Scale and encode image and thumbnail irr = ImageResizer.CreateMaxSizeImage(hpf.InputStream); irr2 = ImageResizer.CreateThumbnail(hpf.InputStream); } //Check if image and thumbnail encoded and scaled correctly if (irr == null || irr.output == null || irr2 == null || irr2.output == null) { if (irr != null && irr.output != null) irr.output.Dispose(); if (irr2 != null && irr2.output != null) irr2.output.Dispose(); if(irr == null) return Json("irr null"); if (irr2 == null) return Json("irr2 null"); if (irr.output == null) return Json("irr.output null. irr.error = " + irr.error); if (irr2.output == null) return Json("irr2.output null. irr2.error = " + irr2.error); } if (irr.output.Length > Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"]) || irr2.output.Length > Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"])) { if(irr.output.Length > Convert.ToInt32(ConfigurationManager.AppSettings["MaxImageSize"])) return Json("irr.output.Length > maximage size. irr.output.Length = " + irr.output.Length + ", irr.error = " + irr.error); return Json("irr2.output.Length > maximage size. irr2.output.Length = " + irr2.output.Length + ", irr2.error = " + irr2.error); } //Store scaled and encoded image and thumbnail .... return Json("success"); } The code is always failing when checking if the output stream is null (i.e. irr.output == null is true). 
ImageResizerResult and ImageResizer public class ImageResizerResult : IDisposable { public MemoryIStream output; public int width; public int height; public string error; public void Dispose() { output.Dispose(); } } public static class ImageResizer { private static Object thislock = new Object(); public static ImageResizerResult CreateMaxSizeImage(Stream input) { uint maxSize = Convert.ToUInt32(ConfigurationManager.AppSettings["MaxImageDimension"]); try { lock (thislock) { // Read the source image var photo = ByteArrayFromStream(input); var factory = (IWICComponentFactory)new WICImagingFactory(); var inputStream = factory.CreateStream(); inputStream.InitializeFromMemory(photo, (uint)photo.Length); var decoder = factory.CreateDecoderFromStream(inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnDemand); var frame = decoder.GetFrame(0); // Compute target size uint width, height, outWidth, outHeight; frame.GetSize(out width, out height); if (width > height) { //Check if width is greater than maxSize if (width > maxSize) { outWidth = maxSize; outHeight = height * maxSize / width; } //Width is less than maxSize, so use existing dimensions else { outWidth = width; outHeight = height; } } else { //Check if height is greater than maxSize if (height > maxSize) { outWidth = width * maxSize / height; outHeight = maxSize; } //Height is less than maxSize, so use existing dimensions else { outWidth = width; outHeight = height; } } // Prepare output stream to cache file var outputStream = new MemoryIStream(); // Prepare JPG encoder var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null); encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache); // Prepare output frame IWICBitmapFrameEncode outputFrame; var arg = new IPropertyBag2[1]; encoder.CreateNewFrame(out outputFrame, arg); var propBag = arg[0]; var propertyBagOption = new PROPBAG2[1]; propertyBagOption[0].pstrName = "ImageQuality"; propBag.Write(1, propertyBagOption, new object[] { 0.85F }); outputFrame.Initialize(propBag); outputFrame.SetResolution(96, 96); outputFrame.SetSize(outWidth, outHeight); // Prepare scaler var scaler = factory.CreateBitmapScaler(); scaler.Initialize(frame, outWidth, outHeight, WICBitmapInterpolationMode.WICBitmapInterpolationModeFant); // Write the scaled source to the output frame outputFrame.WriteSource(scaler, new WICRect { X = 0, Y = 0, Width = (int)outWidth, Height = (int)outHeight }); outputFrame.Commit(); encoder.Commit(); return new ImageResizerResult { output = outputStream, height = (int)outHeight, width = (int)outWidth }; } } catch (Exception e) { return new ImageResizerResult { error = "Create maxsizeimage = " + e.Message }; } } } Thoughts on where this is going wrong? Thanks in advance for the effort.
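
    As a side note, the outWidth/outHeight computation in CreateMaxSizeImage is easy to check in isolation. Below is the same aspect-ratio arithmetic transcribed into a small standalone Java method; the names are invented for illustration and this is not part of the original C# code.

    // Scales the longer side down to maxSize while preserving the aspect ratio,
    // mirroring the branch logic in CreateMaxSizeImage above.
    public class MaxSizeMath {
        static int[] maxSizeDimensions(int width, int height, int maxSize) {
            if (width <= maxSize && height <= maxSize) {
                return new int[] { width, height }; // already within bounds
            }
            if (width > height) {
                return new int[] { maxSize, (int) ((long) height * maxSize / width) };
            }
            return new int[] { (int) ((long) width * maxSize / height), maxSize };
        }

        public static void main(String[] args) {
            int[] d = maxSizeDimensions(4000, 3000, 1024);
            System.out.println(d[0] + " x " + d[1]); // prints 1024 x 768
        }
    }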

    Read the article

  • LWJGL Voxel game, glDrawArrays

    - by user22015
    I've been learning about 3D for a couple days now. I managed to create a chunk (8x8x8). Add optimization so it only renders the active and visible blocks. Then I added so it only draws the faces which don't have a neighbor. Next what I found from online research was that it is better to use glDrawArrays to increase performance. So I restarted my little project. Render an entire chunck, add optimization so it only renders active and visible blocks. But now I want to add so it only draws the visible faces while using glDrawArrays. This is giving me some trouble with calling glDrawArrays because I'm passing a wrong count parameter. > # A fatal error has been detected by the Java Runtime Environment: > # > # EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000006e31a03, pid=1032, tid=3184 > # Stack: [0x00000000023a0000,0x00000000024a0000], sp=0x000000000249ef70, free space=1019k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [ig4icd64.dll+0xa1a03] Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j org.lwjgl.opengl.GL11.nglDrawArrays(IIIJ)V+0 j org.lwjgl.opengl.GL11.glDrawArrays(III)V+20 j com.vox.block.Chunk.render()V+410 j com.vox.ChunkManager.render()V+30 j com.vox.Game.render()V+11 j com.vox.GameHandler.render()V+12 j com.vox.GameHandler.gameLoop()V+15 j com.vox.Main.main([Ljava/lang/StringV+13 v ~StubRoutines::call_stub public class Chunk { public final static int[] DIM = { 8, 8, 8}; public final static int CHUNK_SIZE = (DIM[0] * DIM[1] * DIM[2]); Block[][][] blocks; private int index; private int vBOVertexHandle; private int vBOColorHandle; public Chunk(int index) { this.index = index; vBOColorHandle = GL15.glGenBuffers(); vBOVertexHandle = GL15.glGenBuffers(); blocks = new Block[DIM[0]][DIM[1]][DIM[2]]; for(int x = 0; x < DIM[0]; x++){ for(int y = 0; y < DIM[1]; y++){ for(int z = 0; z < DIM[2]; z++){ blocks[x][y][z] = new Block(); } } } } public void render(){ Block curr; FloatBuffer vertexPositionData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12); FloatBuffer vertexColorData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12); int counter = 0; for(int x = 0; x < DIM[0]; x++){ for(int y = 0; y < DIM[1]; y++){ for(int z = 0; z < DIM[2]; z++){ curr = blocks[x][y][z]; boolean[] neightbours = validateNeightbours(x, y, z); if(curr.isActive() && !neightbours[6]) { float[] arr = curr.createCube((index*DIM[0]*Block.BLOCK_SIZE*2) + x*2, y*2, z*2, neightbours); counter += arr.length; vertexPositionData2.put(arr); vertexColorData2.put(createCubeVertexCol(curr.getCubeColor())); } } } } vertexPositionData2.flip(); vertexPositionData2.flip(); FloatBuffer vertexPositionData = BufferUtils.createFloatBuffer(vertexColorData2.position()); FloatBuffer vertexColorData = BufferUtils.createFloatBuffer(vertexColorData2.position()); for(int i = 0; i < vertexPositionData2.position(); i++) vertexPositionData.put(vertexPositionData2.get(i)); for(int i = 0; i < vertexColorData2.position(); i++) vertexColorData.put(vertexColorData2.get(i)); vertexColorData.flip(); vertexPositionData.flip(); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle); GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexPositionData, GL15.GL_STATIC_DRAW); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle); GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexColorData, GL15.GL_STATIC_DRAW); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); GL11.glPushMatrix(); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle); GL11.glVertexPointer(3, 
GL11.GL_FLOAT, 0, 0L); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle); GL11.glColorPointer(3, GL11.GL_FLOAT, 0, 0L); System.out.println("Counter " + counter); GL11.glDrawArrays(GL11.GL_LINE_LOOP, 0, counter); GL11.glPopMatrix(); //blocks[r.nextInt(DIM[0])][2][r.nextInt(DIM[2])].setActive(false); } //Random r = new Random(); private float[] createCubeVertexCol(float[] CubeColorArray) { float[] cubeColors = new float[CubeColorArray.length * 4 * 6]; for (int i = 0; i < cubeColors.length; i++) { cubeColors[i] = CubeColorArray[i % CubeColorArray.length]; } return cubeColors; } private boolean[] validateNeightbours(int x, int y, int z) { boolean[] bools = new boolean[7]; bools[6] = true; bools[6] = bools[6] && (bools[0] = y > 0 && y < DIM[1]-1 && blocks[x][y+1][z].isActive());//top bools[6] = bools[6] && (bools[1] = y > 0 && y < DIM[1]-1 && blocks[x][y-1][z].isActive());//bottom bools[6] = bools[6] && (bools[2] = z > 0 && z < DIM[2]-1 && blocks[x][y][z+1].isActive());//front bools[6] = bools[6] && (bools[3] = z > 0 && z < DIM[2]-1 && blocks[x][y][z-1].isActive());//back bools[6] = bools[6] && (bools[4] = x > 0 && x < DIM[0]-1 && blocks[x+1][y][z].isActive());//left bools[6] = bools[6] && (bools[5] = x > 0 && x < DIM[0]-1 && blocks[x-1][y][z].isActive());//right return bools; } } public class Block { public static final float BLOCK_SIZE = 1f; public enum BlockType { Default(0), Grass(1), Dirt(2), Water(3), Stone(4), Wood(5), Sand(6), LAVA(7); int BlockID; BlockType(int i) { BlockID=i; } } private boolean active; private BlockType type; public Block() { this(BlockType.Default); } public Block(BlockType type){ active = true; this.type = type; } public float[] getCubeColor() { switch (type.BlockID) { case 1: return new float[] { 1, 1, 0 }; case 2: return new float[] { 1, 0.5f, 0 }; case 3: return new float[] { 0, 0f, 1f }; default: return new float[] {0.5f, 0.5f, 1f}; } } public float[] createCube(float x, float y, float z, boolean[] neightbours){ int counter = 0; for(boolean b : neightbours) if(!b) counter++; float[] array = new float[counter*12]; int offset = 0; if(!neightbours[0]){//top array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; } if(!neightbours[1]){//bottom array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; } if(!neightbours[2]){//front array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = 
x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; } if(!neightbours[3]){//back array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; } if(!neightbours[4]){//left array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; } if(!neightbours[5]){//right array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; } return Arrays.copyOf(array, offset); } public boolean isActive() { return active; } public void setActive(boolean active) { this.active = active; } public BlockType getType() { return type; } public void setType(BlockType type) { this.type = type; } } I highlighted the code I'm concerned about in this following screenshot: - http://imageshack.us/a/img820/7606/18626782.png - (Not allowed to upload images yet) I know the code is a mess but I'm just testing stuff so I wasn't really thinking about it.
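
    One detail worth isolating from the chunk code above: counter accumulates arr.length, which is a number of floats, while glDrawArrays expects a number of vertices. With glVertexPointer(3, GL_FLOAT, 0, 0L) each vertex occupies three floats, so passing the raw float count asks the driver to read well past the end of the buffer, which is a plausible (not verified) match for the EXCEPTION_ACCESS_VIOLATION shown. The standalone sketch below only shows that bookkeeping; the names are invented for illustration.

    public class DrawCount {
        /** Converts the number of floats written into the vertex VBO into a vertex count. */
        static int vertexCount(int floatsWritten) {
            return floatsWritten / 3; // 3 position components (x, y, z) per vertex
        }

        public static void main(String[] args) {
            int floatsWritten = 6 * 12; // one fully exposed cube: 6 faces * 12 floats each
            System.out.println("floats written = " + floatsWritten
                    + ", vertices for glDrawArrays = " + vertexCount(floatsWritten));
        }
    }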

    Read the article

  • How to Customize the Internet Explorer 8 Title Bar

    - by Mysticgeek
    If you’re looking for a way to personalize IE 8, one method is to customize the Title Bar. Here we look at a simple Registry hack that will get the job done. By default, the Internet Explorer Title Bar is displayed at the top of the browser with the site name followed by Windows Internet Explorer. If you have a small office you might want to change it to the company name, or just change it to something more personal at home. Customize the IE 8 Title Bar Note: Before making any changes to the Registry, make sure to back it up. The first thing we need to do is open the Registry by typing regedit into the Search box in the Start Menu and hitting Enter. With the Registry open, navigate to HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main. Then create a new String Value and name it Window Title. Right-click on the Window Title String and enter whatever you want to display on the Title Bar in the Value data field, then click OK. When you’re done, you should see the new String called Window Title with whatever you entered as the value. Close out of the Registry. Restart or launch Internet Explorer and you’ll now see your new text in the Title Bar. If you want to change it to something else, just go in and modify the Value data. If you want to switch it back to the default, just go back in and delete the String we created. A lot of times you’ll see corporate branding already in the title bar from your ISP or some computer company. To get rid of it, check out The Geek’s article on how to remove it. This should work with other versions of Internet Explorer as well.
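
    If you would rather script the change than click through regedit, the same value can be written with the reg.exe command line tool. The small Java wrapper below is just one illustrative way to drive it; the key path and the Window Title value name come from the steps above, and everything else is an assumption.

    import java.io.IOException;

    public class SetIeTitle {
        public static void main(String[] args) throws IOException, InterruptedException {
            String title = args.length > 0 ? args[0] : "My Custom Title";
            // Creates (or overwrites) the "Window Title" string value described above.
            ProcessBuilder pb = new ProcessBuilder(
                    "reg", "add",
                    "HKCU\\Software\\Microsoft\\Internet Explorer\\Main",
                    "/v", "Window Title",
                    "/t", "REG_SZ",
                    "/d", title,
                    "/f"); // overwrite without prompting
            pb.inheritIO();
            int exit = pb.start().waitFor();
            System.out.println(exit == 0
                    ? "Title Bar text updated; restart Internet Explorer to see it."
                    : "reg.exe returned exit code " + exit);
        }
    }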

    Read the article

  • Prime Numbers Code Help

    - by andrew
    Hello Everybody, I am supposed to "write a Java program that reads a positive integer n from standard input, then prints out the first n prime numbers." It's divided into 3 parts. 1st: This function will return true or false according to whether m is prime or composite. The array argument P will contain a sufficient number of primes to do the testing. Specifically, at the time isPrime() is called, array P must contain (at least) all primes p in the range 2 ≤ p ≤ √m. For instance, to test m = 53 for primality, one must do successive trial divisions by 2, 3, 5, and 7. We go no further since 11 > √53. Thus a precondition for the function call isPrime(53, P) is that P[0] = 2, P[1] = 3, P[2] = 5, and P[3] = 7. The return value in this case would be true since all these divisions fail. Similarly, to test m = 143, one must do trial divisions by 2, 3, 5, 7, and 11 (since 13 > √143). The precondition for the function call isPrime(143, P) is therefore P[0] = 2, P[1] = 3, P[2] = 5, P[3] = 7, and P[4] = 11. The return value in this case would be false since 11 divides 143. Function isPrime() should contain a loop that steps through array P, doing trial divisions. This loop should terminate when either a trial division succeeds, in which case false is returned, or when the next prime in P is greater than √m, in which case true is returned. Then there is the "main function" • Check that the user supplied exactly one command line argument which can be interpreted as a positive integer n. If the command line argument is not a single positive integer, your program will print a usage message as specified in the examples below, then exit. • Allocate array Primes[] of length n and initialize Primes[0] = 2. • Enter a loop which will discover subsequent primes and store them as Primes[1], Primes[2], Primes[3], ……, Primes[n-1]. This loop should contain an inner loop which walks through successive integers and tests them for primality by calling function isPrime() with appropriate arguments. • Print the contents of array Primes[] to stdout, 10 to a line separated by single spaces. In other words Primes[0] through Primes[9] will go on line 1, Primes[10] through Primes[19] will go on line 2, and so on. Note that if n is not a multiple of 10, then the last line of output will contain fewer than 10 primes. The last function is called "usage", which I am not sure how to write! Your program will include a function called Usage() having signature static void Usage() that prints this message to stderr, then exits. Thus your program will contain three functions in all: main(), isPrime(), and Usage(). Each should be preceded by a comment block giving its name, a short description of its operation, and any necessary preconditions (such as those for isPrime()). And here is my code, but I am having a bit of a problem and could you guys help me fix it? If I enter the number "5" it gives me the prime numbers which are "6,7,8,9", which doesn't make much sense.
    import java.util.*;
    import java.io.*;
    import java.lang.*;

    public class PrimeNumber {

        static boolean isPrime(int m, int[] P){
            int squarert = Math.round( (float)Math.sqrt(m) );
            int i = 2;
            boolean ans=false;
            while ((i<=squarert) & (ans==false)) {
                int c= P[i];
                if (m%c==0) ans= true;
                else ans= false;
                i++;
            }
            /*
            if(ans ==true) ans=false;
            else ans=true;
            return ans;
        }

        ///****main
        public static void main(String[] args ) {
            Scanner in= new Scanner(System.in);
            int input= in.nextInt();
            int i, j;
            int squarert;
            boolean ans = false;
            int userNum;
            int remander = 0;
            System.out.println("input: " + input);
            int[] prime = new int[input];
            prime[0]= 2;
            for(i=1; i
                ans = isPrime(j,prime);
                j++;}
                prime[i] = j;
            }
            //prnt prime
            System.out.println("The first " + input + " prime number(s) are: ");
            for(int r=0; r
        }//end of main
    }

    Thanks for the help
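    Not a fix for the code above, but a minimal sketch of how the two helpers the assignment describes could look; the usage string and the assumption that unfilled slots of P are zero are mine, not part of the assignment.

        // Trial division using the primes already stored in P (unfilled slots assumed to be 0).
        static boolean isPrime(int m, int[] P) {
            for (int i = 0; i < P.length && P[i] != 0 && P[i] * P[i] <= m; i++) {
                if (m % P[i] == 0) {
                    return false;   // a trial division succeeded, so m is composite
                }
            }
            return true;            // no prime up to sqrt(m) divides m
        }

        // Prints a usage message to stderr and exits, as the spec requires.
        static void Usage() {
            System.err.println("Usage: java PrimeNumber <positive integer>");
            System.exit(1);
        }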

    Read the article

  • How to add EditText and Button to an Efficient Adapter which has an Icon and text

    - by jayanthgande
    Hi, I want to create a layout in such a way that on top edittext and button should be there in one row. The search text I enter in editext and click on search button. Then I want to display a custom list view where each row contains image and text.(As per the API demos example list14 I have tried). But when I run the application, button and edittext are being added to each row (i.e., Each row contains a image, text, editext, button. Can Any one guide how to resolve this issue. Below is my xml file: <!-- <FrameLayout android:layout_width="wrap_content" android:layout_height="0dip" android:layout_weight="1"></FrameLayout> --> <ImageView android:id="@+id/icon" android:layout_width="48dip" android:layout_height="48dip" /> <TextView android:id="@+id/text" android:layout_gravity="center_vertical" android:layout_width="0dip" android:layout_weight="1.0" android:layout_height="wrap_content" /> <!-- <EditText android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/prdsearchtb" android:text="@string/tb_prd_search_lbl"></EditText> --> <!-- <TableLayout android:id="@+id/TableLayout01" android:layout_width="fill_parent" android:layout_height="wrap_content"> <TableRow> --> <Button android:id="@+id/prdsrcbutton" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/btn_lbl_prd_search" android:layout_x="2px" android:layout_y="410px"></Button> <!-- </TableRow> </TableLayout> -- and Java File: /** * */ package org.techdata.activity; import android.app.AlertDialog; import android.app.ListActivity; import android.content.Context; import android.content.Intent; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.os.Bundle; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.BaseAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.ImageView; import android.widget.ListView; import android.widget.TextView; /** * @author jayanthg * */ public class ProductSearch extends ListActivity { private static class ProductSearchAdapter extends BaseAdapter { private LayoutInflater mInflater; private Bitmap mIcon1; private Bitmap mIcon2; public ProductSearchAdapter(Context context) { mInflater = LayoutInflater.from(context); // Icons bound to the rows. mIcon1 = BitmapFactory.decodeResource(context.getResources(), R.drawable.icon48x48_1); mIcon2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.icon48x48_2); } @Override public int getCount() { return DATA.length; } @Override public Object getItem(int position) { return position; } @Override public long getItemId(int position) { return position; } @Override public View getView(final int position, View convertView, ViewGroup parent) { ViewHolder holder; Button btn=null; if (convertView == null) { convertView = mInflater.inflate(R.layout.productsearch, null); // Creates a ViewHolder and store references to the two children // views // we want to bind data to. holder = new ViewHolder(); holder.text = (TextView) convertView.findViewById(R.id.text); holder.icon = (ImageView) convertView.findViewById(R.id.icon); btn=(Button)convertView.findViewById(R.id.prdsrcbutton); convertView.setTag(holder); } else { // Get the ViewHolder back to get fast access to the TextView // and the ImageView. holder = (ViewHolder) convertView.getTag(); } // Bind the data efficiently with the holder. 
holder.text.setText(DATA[position]); holder.icon.setImageBitmap((position & 1) == 1 ? mIcon1 : mIcon2); holder.icon.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Log.i("image", " u clicked on icon Position" + position); } }); holder.text.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Log.i("Text", " u clicked on text Position" + position); } }); btn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Log.i("Button","U clicked on button"); } }); return convertView; } static class ViewHolder { TextView text; ImageView icon; } private static final String[] DATA = { "Abbaye de Belloc", "Abbaye du Mont des Cats" }; } ListView product_search_list; Button srch_btn; EditText srch_text; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setListAdapter(new ProductSearchAdapter(this)); // setContentView(R.layout.productsearch); // getListView().setEmptyView(findViewById(R.id.text)); // srch_text = (EditText)findViewById(R.id.prdsearchtb); // srch_btn = (Button) findViewById(R.id.prdsearchtb); // srch_btn.setOnClickListener(new View.OnClickListener() { // // @Override // public void onClick(View v) { // callProductSearchAdapter(); // // } // }); } void callProductSearchAdapter() { setListAdapter(new ProductSearchAdapter(this)); } private void createDialog(String title, String text, final Intent i) { if (i == null) { AlertDialog ad = new AlertDialog.Builder(this).setIcon( R.drawable.alert_dialog_icon).setPositiveButton("Ok", null) .setTitle(title).setMessage(text).create(); ad.show(); } } } Regards: Jayanth
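    One common way to get the layout described above — this is a sketch of my own, not something from the post — is to keep the EditText and Button in the activity's screen layout, leave only the ImageView and TextView in the row layout that the adapter inflates, and wire the search button up in onCreate. The R.layout.product_search_screen name is an assumption; the prdsearchtb and prdsrcbutton ids are the ones from the question. Since ProductSearch extends ListActivity, the screen layout must contain a ListView with the id @android:id/list for setContentView to work.

        // Hypothetical screen layout: EditText + Button + ListView (@android:id/list).
        // The row layout used by ProductSearchAdapter keeps only the icon and the text view.
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.product_search_screen);
            final EditText searchText = (EditText) findViewById(R.id.prdsearchtb);
            Button searchButton = (Button) findViewById(R.id.prdsrcbutton);
            searchButton.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    String query = searchText.getText().toString();
                    // filter/re-query using 'query' here, then rebind the adapter
                    setListAdapter(new ProductSearchAdapter(ProductSearch.this));
                }
            });
        }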

    Read the article

  • Duplication in parallel inheritance hierarchies

    - by flamingpenguin
    Using an OO language with static typing (like Java), what are good ways to represent the following model invariant without large amounts of duplication? I have two (actually multiple) flavours of the same structure. Each flavour requires its own (unique to that flavour) data on each of the objects within that structure, as well as some shared data. But within each instance of the aggregation only objects of one (the same) flavour are allowed. FooContainer can contain FooSources and FooDestinations and associations between the "Foo" objects; BarContainer can contain BarSources and BarDestinations and associations between the "Bar" objects.

        interface Container {
            List<? extends Source> sources();
            List<? extends Destination> destinations();
            List<? extends Associations> associations();
        }

        interface FooContainer extends Container {
            List<? extends FooSource> sources();
            List<? extends FooDestination> destinations();
            List<? extends FooAssociations> associations();
        }

        interface BarContainer extends Container {
            List<? extends BarSource> sources();
            List<? extends BarDestination> destinations();
            List<? extends BarAssociations> associations();
        }

        interface Source {
            String getSourceDetail1();
        }

        interface FooSource extends Source {
            String getSourceDetail2();
        }

        interface BarSource extends Source {
            String getSourceDetail3();
        }

        interface Destination {
            String getDestinationDetail1();
        }

        interface FooDestination extends Destination {
            String getDestinationDetail2();
        }

        interface BarDestination extends Destination {
            String getDestinationDetail3();
        }

        interface Association {
            Source getSource();
            Destination getDestination();
        }

        interface FooAssociation extends Association {
            FooSource getSource();
            FooDestination getDestination();
            String getFooAssociationDetail();
        }

        interface BarAssociation extends Association {
            BarSource getSource();
            BarDestination getDestination();
            String getBarAssociationDetail();
        }
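    One way to remove most of this duplication — a sketch of my own, not something from the question — is to make the shared interfaces generic over the flavour-specific element types, so that each flavour only declares its extra detail methods and the "same flavour only" invariant is enforced by the type parameters:

        import java.util.List;

        interface Source { String getSourceDetail1(); }
        interface Destination { String getDestinationDetail1(); }

        interface Association<S extends Source, D extends Destination> {
            S getSource();
            D getDestination();
        }

        interface Container<S extends Source, D extends Destination,
                            A extends Association<S, D>> {
            List<S> sources();
            List<D> destinations();
            List<A> associations();
        }

        // A flavour then adds only its own details:
        interface FooSource extends Source { String getSourceDetail2(); }
        interface FooDestination extends Destination { String getDestinationDetail2(); }
        interface FooAssociation extends Association<FooSource, FooDestination> {
            String getFooAssociationDetail();
        }
        interface FooContainer extends Container<FooSource, FooDestination, FooAssociation> { }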

    Read the article

  • Android app to remote control Samsung Smart TVs

    - by Gopinath
    Smart TV Remote is an unofficial Android app that lets you control Samsung Smart TVs connected over a local WiFi network. This app comes in very handy when you want to control a TV which is not in line of sight of your TV remote control, or when you just want to use your mobile phone/tablet to control the TV. Setting up a TV is very easy using the auto-scan feature. Once the TV is set up, you are all set to start using the app as a remote control. A traditional remote control makes use of infrared technology and needs to be in the line of sight of the TV receiver to work. But this app makes use of WiFi technology, which gives it the flexibility of controlling the TV as long as the mobile and TV are connected to the WiFi network. It just works even if the TV is behind a wall. The app provides very easy-to-use options to switch between channels, and separate remotes with media controls, smart hub features, and a numeric keypad if you want to navigate to a channel through its number. The app also provides a home screen widget with volume controls and channel navigation options. I use this app to control a Samsung E-Series Smart TV at home and it works very well. I'm impressed by the ease with which it allows you to set up a TV, the support for multiple TVs, controlling the TV though I'm not in the line of sight, and using the volume buttons of the smartphone to control the volume of the TV. What's annoying and missing with the app: As advertised, the app works very well in controlling Samsung TVs (B-, C-, D-, E-, and F-Series), except that it is very painful to move the mouse pointer while browsing the web on the TV. When you try to move the mouse pointer using the app, the mouse moves painfully slowly. I gave up using the app to control the mouse pointer after trying a couple of times. I installed this app thinking that it might help me browse the web on Smart TVs, especially with keyboard support to type web URLs. The app does not support entering text either while browsing the web or while searching through Smart TV apps like YouTube, App Store etc. The developers of this app never advertised keyboard support, so no complaints about this. But it would be very helpful if the developers allowed this app to be used as a keyboard and rescued me from the pain of typing text using the TV remote control. Overall this is a very nice app and worth trying out – Download Smart TV Remote from Google Play

    Read the article

  • Databind a datagrid header combobox from ViewModel

    - by Mike
    I've got a Datagrid with a column defined as this: <Custom:DataGridTextColumn HeaderStyle="{StaticResource ComboBoxHeader}" Width="Auto" Header="Type" Binding="{Binding Path=Type}" IsReadOnly="True" /> The ComboBoxHeader style is defined in a resource dictionary as this: <Style x:Key="ComboBoxHeader" TargetType="{x:Type my:DataGridColumnHeader}"> <Setter Property="VerticalContentAlignment" Value="Center"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type my:DataGridColumnHeader}"> <ControlTemplate.Resources> <Storyboard x:Key="ShowFilterControl"> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="filterComboBox" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.5000000" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> <ColorAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="filterComboBox" Storyboard.TargetProperty="(Panel.Background).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00" Value="Transparent"/> <SplineColorKeyFrame KeyTime="00:00:00.5000000" Value="White"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="HideFilterControl"> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="filterComboBox" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00.4000000" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> <ColorAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="filterComboBox" Storyboard.TargetProperty="(UIElement.OpacityMask).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00" Value="Black"/> <SplineColorKeyFrame KeyTime="00:00:00.4000000" Value="#00000000"/> </ColorAnimationUsingKeyFrames> </Storyboard> </ControlTemplate.Resources> <my:DataGridHeaderBorder x:Name="dataGridHeaderBorder" Margin="0" VerticalAlignment="Top" Height="31" IsClickable="{TemplateBinding CanUserSort}" IsHovered="{TemplateBinding IsMouseOver}" IsPressed="{TemplateBinding IsPressed}" SeparatorBrush="{TemplateBinding SeparatorBrush}" SeparatorVisibility="{TemplateBinding SeparatorVisibility}" SortDirection="{TemplateBinding SortDirection}" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Padding="{TemplateBinding Padding}" Grid.ColumnSpan="1"> <Grid x:Name="grid" Width="Auto" Height="Auto" RenderTransformOrigin="0.5,0.5"> <Grid.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform/> <RotateTransform/> <TranslateTransform/> </TransformGroup> </Grid.RenderTransform> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <ContentPresenter x:Name="contentPresenter" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" ContentStringFormat="{TemplateBinding ContentStringFormat}" ContentTemplate="{TemplateBinding ContentTemplate}"> <ContentPresenter.Content> <MultiBinding Converter="{StaticResource headerConverter}"> <MultiBinding.Bindings> <Binding ElementName="filterComboBox" Path="Text" /> <Binding RelativeSource="{RelativeSource TemplatedParent}" Path="Content" /> </MultiBinding.Bindings> </MultiBinding> </ContentPresenter.Content> </ContentPresenter> <ComboBox 
ItemsSource="{Binding Path=Types}" x:Name="filterComboBox" VerticalAlignment="Center" HorizontalAlignment="Right" MinWidth="20" Height="Auto" OpacityMask="Black" Visibility="Collapsed" Text="" Grid.Column="0" Grid.ColumnSpan="1"/> </Grid> </my:DataGridHeaderBorder> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="True"> <Trigger.EnterActions> <BeginStoryboard x:Name="ShowFilterControl_BeginStoryboard" Storyboard="{StaticResource ShowFilterControl}"/> <StopStoryboard BeginStoryboardName="HideFilterControl_BeginShowFilterControl"/> </Trigger.EnterActions> <Trigger.ExitActions> <BeginStoryboard x:Name="HideFilterControl_BeginShowFilterControl" Storyboard="{StaticResource HideFilterControl}"/> <StopStoryboard BeginStoryboardName="ShowFilterControl_BeginStoryboard"/> </Trigger.ExitActions> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> <Setter Property="Background"> <Setter.Value> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="#FF0067AD" Offset="1"/> <GradientStop Color="#FF003355" Offset="0.5"/> <GradientStop Color="#FF78A8C9" Offset="0"/> </LinearGradientBrush> </Setter.Value> </Setter> <Setter Property="Foreground" Value="White"/> <Setter Property="BorderBrush"> <Setter.Value> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="#D8000000" Offset="0.664"/> <GradientStop Color="#7F003355" Offset="1"/> </LinearGradientBrush> </Setter.Value> </Setter> <Setter Property="FontWeight" Value="Bold"/> <Setter Property="BorderThickness" Value="1,1,1,0"/> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="Padding" Value="5,0"/> </Style> As you can see, I'm trying to databind the combobox's ItemsSource to Types, but this doesn't work. The list is in my ViewModel that is being applied to my page, how would I specify in this style that is in my resource dictionary that I want to bind to a source in my viewmodel.

    Read the article

  • Silverlight with using of DependencyProperty and ControlTemplate

    - by Taras
    Hello everyone, I'm starting to study Silverlight 3 and Visual Studio 2008. I've been trying to create Windows sidebar gadget with button controls that look like circles (I have couple of "roundish" png images). The behavior, I want, is the following: when the mouse hovers over the image it gets larger a bit. When we click on it, then it goes down and up. When we leave the button's image it becomes normal sized again. Cause I'm going to have couple of such controls I decided to implement custom control: like a button but with image and no content text. My problem is that I'm not able to set my custom properties in my template and style. What am I doing wrong? My teamplate control with three additional properties: namespace SilverlightGadgetDocked { public class ActionButton : Button { /// <summary> /// Gets or sets the image source of the button. /// </summary> public String ImageSource { get { return (String)GetValue(ImageSourceProperty); } set { SetValue(ImageSourceProperty, value); } } /// <summary> /// Gets or sets the ratio that is applied to the button's size /// when the mouse control is over the control. /// </summary> public Double ActiveRatio { get { return (Double)GetValue(ActiveRatioProperty); } set { SetValue(ActiveRatioProperty, value); } } /// <summary> /// Gets or sets the offset - the amount of pixels the button /// is shifted when the the mouse control is over the control. /// </summary> public Double ActiveOffset { get { return (Double)GetValue(ActiveOffsetProperty); } set { SetValue(ActiveOffsetProperty, value); } } public static readonly DependencyProperty ImageSourceProperty = DependencyProperty.Register("ImageSource", typeof(String), typeof(ActionButton), new PropertyMetadata(String.Empty)); public static readonly DependencyProperty ActiveRatioProperty = DependencyProperty.Register("ActiveRatio", typeof(Double), typeof(ActionButton), new PropertyMetadata(1.0)); public static readonly DependencyProperty ActiveOffsetProperty = DependencyProperty.Register("ActiveOffset", typeof(Double), typeof(ActionButton), new PropertyMetadata(0)); public ActionButton() { this.DefaultStyleKey = typeof(ActionButton); } } } And XAML with styles: <UserControl x:Class="SilverlightGadgetDocked.Page" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:SilverlightGadgetDocked="clr-namespace:SilverlightGadgetDocked" Width="130" Height="150" SizeChanged="UserControl_SizeChanged" MouseEnter="UserControl_MouseEnter" MouseLeave="UserControl_MouseLeave"> <Canvas> <Canvas.Resources> <Style x:Name="ActionButtonStyle" TargetType="SilverlightGadgetDocked:ActionButton"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="SilverlightGadgetDocked:ActionButton"> <Grid> <Image Source="{TemplateBinding ImageSource}" Width="{TemplateBinding Width}" Height="{TemplateBinding Height}"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style x:Key="DockedActionButtonStyle" TargetType="SilverlightGadgetDocked:ActionButton" BasedOn="{StaticResource ActionButtonStyle}"> <Setter Property="Canvas.ZIndex" Value="2"/> <Setter Property="Canvas.Top" Value="10"/> <Setter Property="Width" Value="30"/> <Setter Property="Height" Value="30"/> <Setter Property="ActiveRatio" Value="1.15"/> <Setter Property="ActiveOffset" Value="5"/> </Style> <Style x:Key="InfoActionButtonStyle" TargetType="SilverlightGadgetDocked:ActionButton" BasedOn="{StaticResource DockedActionButtonStyle}"> <Setter Property="ImageSource" 
Value="images/action_button_info.png"/> </Style> <Style x:Key="ReadActionButtonStyle" TargetType="SilverlightGadgetDocked:ActionButton" BasedOn="{StaticResource DockedActionButtonStyle}"> <Setter Property="ImageSource" Value="images/action_button_read.png"/> </Style> <Style x:Key="WriteActionButtonStyle" TargetType="SilverlightGadgetDocked:ActionButton" BasedOn="{StaticResource DockedActionButtonStyle}"> <Setter Property="ImageSource" Value="images/action_button_write.png"/> </Style> </Canvas.Resources> <StackPanel> <Image Source="images/background_docked.png" Stretch="None"/> <TextBlock Foreground="White" MaxWidth="130" HorizontalAlignment="Right" VerticalAlignment="Top" Padding="0,0,5,0" Text="Name" FontSize="13"/> </StackPanel> <SilverlightGadgetDocked:ActionButton Canvas.Left="15" Style="{StaticResource InfoActionButtonStyle}" MouseLeftButtonDown="imgActionInfo_MouseLeftButtonDown"/> <SilverlightGadgetDocked:ActionButton Canvas.Left="45" Style="{StaticResource ReadActionButtonStyle}" MouseLeftButtonDown="imgActionRead_MouseLeftButtonDown"/> <SilverlightGadgetDocked:ActionButton Canvas.Left="75" Style="{StaticResource WriteAtionButtonStyle}" MouseLeftButtonDown="imgActionWrite_MouseLeftButtonDown"/> </Canvas> </UserControl> And Visual Studio reports that "Invalid attribute value ActiveRatio for property Property" in line 27 <Setter Property="ActiveRatio" Value="1.15"/> VERY BIG THANKS!!!

    Read the article

  • What is causing my spacebar to randomly stop working?

    - by Chris Billington
    A couple of times a day, I'll be typing something and realise I can't type spaces. Usually the cursor will flicker instead when I press the spacebar, and I can type all other letters as far as I can tell. If I'm in a terminal the cursor turns from a solid square to an empty square until I release the spacebar. For some reason, restarting compiz (alt-F2, then compiz) fixes it, until it next occurs. I can still copy and paste spaces from sources that already have them, and I can still insert spaces with ctrl-shift-u, 20, enter. This has been happening for a while, since before I upgraded to maverick, but it feels like it's becoming more frequent. There really doesn't seem to be any kind of a pattern to it. I'm using 64 bit ubuntu 10.10 on a system76 panp7 laptop. Any ideas how I might troubleshoot? EDIT: using xev, normally a spacebar press registers as:

        KeyPress event, serial 36, synthetic NO, window 0x5600001, root 0x101, subw 0x0, time 26488647, (88,403), root:(748,458), state 0x10, keycode 65 (keysym 0x20, space), same_screen YES, XLookupString gives 1 bytes: (20) " " XmbLookupString gives 1 bytes: (20) " " XFilterEvent returns: False
        KeyRelease event, serial 36, synthetic NO, window 0x5600001, root 0x101, subw 0x0, time 26488729, (88,403), root:(748,458), state 0x10, keycode 65 (keysym 0x20, space), same_screen YES, XLookupString gives 1 bytes: (20) " " XFilterEvent returns: False

    But when it's stopped behaving, a press of the spacebar instead gives the three events:

        FocusOut event, serial 36, synthetic NO, window 0x5600001, mode NotifyGrab, detail NotifyAncestor
        FocusIn event, serial 36, synthetic NO, window 0x5600001, mode NotifyUngrab, detail NotifyAncestor
        KeymapNotify event, serial 36, synthetic NO, window 0x0, keys: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    FURTHER EDIT: Ok, so I think I've solved the problem, and by that I mean I now know which package to file a bug against. I have a hot corner which initiates a window picker, and I've customised the window picking so that left click goes to a window, right click closes one and spacebar zooms in on one. When I go to this hot corner, compiz must take control of my spacebar, and clearly isn't giving it back when I leave the window picker. So I'll be filing a bug against compiz. Reported: here

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor has also presented an session in 2011 “Enterprise Integration Patterns: Past, Present and Future” which is well worth a look. I’ll be covering more patterns in the coming weeks, I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course. There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patens using the Windows Azure Service Bus direct programming model. In BizTalk Server receive pipelines are typically used to implement the splitter patterns, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages. Splitter A message splitter takes a message and spits the message into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splatter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of sub-message, or the trading partner that the sub message is to be sent to. Aggregator An aggregator takes a stream or related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. 
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was creating in the sending application. The messages have been aggregated to form a massage with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then used the values from this first sub message to set the properties in the message header of the large message during the aggregation process. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods, however doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
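    The article notes that the queue must be created with RequiresSession set to true, but does not show that step. The snippet below is a hedged sketch of how it could be done with the NamespaceManager API from the same SDK generation (Microsoft.ServiceBus / Microsoft.ServiceBus.Messaging namespaces); treat the exact class and method names as assumptions to verify against the SDK version you are using.

        // Sketch: create the session-enabled queue used by the splitter and aggregator.
        TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(
            AccountDetails.Name, AccountDetails.Key);
        Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri(
            "sb", AccountDetails.Namespace, string.Empty);

        NamespaceManager namespaceManager = new NamespaceManager(serviceBusUri, credentials);
        if (!namespaceManager.QueueExists(AccountDetails.QueueName))
        {
            QueueDescription queueDescription = new QueueDescription(AccountDetails.QueueName)
            {
                // Sessions are required so the sub messages can be correlated by SessionId.
                RequiresSession = true
            };
            namespaceManager.CreateQueue(queueDescription);
        }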

    Read the article

  • How To Uninstall, Disable, and Remove Windows Defender. Also, How to Turn It Off

    - by The Geek
    If you’re already running a full anti-malware suite, you might not even realize that Windows Defender is already installed with Windows, and is probably wasting precious resources. Here’s how to get rid of it. Now, just to be clear, we’re not saying that we hate Windows Defender. Some spyware protection is better than none, and it’s built in and free! But… if you are already running something that provides great anti-malware protection, there’s no need to have more than one application running at a time. Disable Windows Defender Unfortunately, Windows Defender is completely built into Windows, and you’re not going to actually uninstall it. What we can do, however, is disable it. Open up Windows Defender, go to Tools on the top menu, and then click on Options. Now click on Administrator on the left-hand pane, uncheck the box for "Use this program", and click the Save button. You will then be told that the program is turned off. Awesome! If you really, really want to make sure that it never comes back, you can also open up the Services panel through Control Panel, or by typing services.msc into the Start Menu search or run boxes. Find Windows Defender in the list and double-click on it… And then you can change Startup type to Disabled. Now again, we’re not necessarily advocating that you get rid of Windows Defender. Make sure you keep yourself protected from malware!

    Read the article

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blogs, that demonstrate how to add management capability to your own application using JMX MBeans. In Part I we saw: How to implement a custom MBean to manage configuration associated with an application. How to package the resulting code and configuration as part of the application's ear file. How to register MBeans upon application startup, and unregistered them upon application stop (or undeployment). How to use generic JMX clients such as JConsole to browse and edit our application's MBean. In Part II we saw: How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters. How to specify meaningful name to our MBean operation parameters. We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will re-write our MBean to simplify how we added localized descriptions. To do so we will take advantage of the functionality we already described in part II and that is now part of WebLogic 10.3.3.0. We will show how to take advantage of WebLogic's localization support to localize our MBeans based on the client's Locale independently of the server's Locale. Each client will see MBean descriptions localized based on his/her own Locale. We will show how to achieve this using JConsole, and also using a sample programmatic JMX Java client. The complete code sample and associated build files for part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK6. To build and deploy our sample application, please follow the instruction provided in Part I, as they also apply to part III's code and associated zip file. Providing custom descriptions take II In part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0 similarly to JDK 7 can automatically localize MBean descriptions as long as those are specified according to the following conventions: Descriptions resource bundle keys are named according to: MBean description: <MBeanInterfaceClass>.mbean MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName> MBean operation description: <MBeanInterfaceClass>.operation.<OperationName> MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName> MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName> MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName> We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean. 
We already followed the above conventions when creating our resource bundle in part II, and our default resource bundle class with English descriptions looks like: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "MBean used to manage persistent application properties"}, {"PropertyConfigMXBean.attribute.Properties", "Properties associated with the running application"}, {"PropertyConfigMXBean.operation.setProperty", "Create a new property, or change the value of an existing property"}, {"PropertyConfigMXBean.operation.setProperty.key", "Name that identify the property to set."}, {"PropertyConfigMXBean.operation.setProperty.value", "Value for the property being set"}, {"PropertyConfigMXBean.operation.getProperty", "Get the value for an existing property"}, {"PropertyConfigMXBean.operation.getProperty.key", "Name that identify the property to be retrieved"} }; } } We have now also added a resource bundle with French localized descriptions: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions_fr extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "Manage proprietes sauvegarde dans un fichier disque."}, {"PropertyConfigMXBean.attribute.Properties", "Proprietes associee avec l'application en cour d'execution"}, {"PropertyConfigMXBean.operation.setProperty", "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."}, {"PropertyConfigMXBean.operation.setProperty.key", "Nom de la propriete dont la valeur est change."}, {"PropertyConfigMXBean.operation.setProperty.value", "Nouvelle valeur"}, {"PropertyConfigMXBean.operation.getProperty", "Retourne la valeur d'une propriete existante."}, {"PropertyConfigMXBean.operation.getProperty.key", "Nom de la propriete a retrouver."} }; } } So now we can just remove the many getDescriptions methods from our MBean code, and have a much cleaner: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Map; import java.util.HashMap; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig extends StandardMBean implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; private static Map operationsParamNames_ = null; static { operationsParamNames_ = new HashMap(); operationsParamNames_.put("setProperty", new String[] {"key", "value"}); operationsParamNames_.put("getProperty", new String[] {"key"}); } public PropertyConfig(String relativePath) throws Exception { super(PropertyConfigMXBean.class , true); props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String 
getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} protected String getParameterName(MBeanOperationInfo op, MBeanParameterInfo param, int sequence) { return operationsParamNames_.get(op.getName())[sequence]; } } The only reason we are still extending the StandardMBean class, is to override the default values for our operations parameters name. If this isn't a concern, then one could just write the following code: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; public PropertyConfig(String relativePath) throws Exception { props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} } Note: The above would also require changing the operations parameters name in the resource bundle classes. 
For instance: PropertyConfigMXBean.operation.setProperty.key would become: PropertyConfigMXBean.operation.setProperty.p0 Client based localization When accessing our MBean using JConsole started with the following command line: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug We see that our MBean descriptions are localized according to the WebLogic's server Locale. English in this case: Note: Consult Part I for information on how to use JConsole to browse/edit our MBean. Now if we specify the client's Locale as part of the JConsole command line as follow: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -J-Dweblogic.management.remote.locale=fr-FR -debug We see that our MBean descriptions are now localized according to the specified client's Locale. French in this case: We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the cient's JMX connections. The value is composed of the client's language code and its country code separated by the - character. The country code is not required, and can be omitted. For instance: -Dweblogic.management.remote.locale=fr We can also specify the client's Locale using a programmatic client as demonstrated below: package blog.wls.jmx.appmbean.client; import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.MBeanInfo; import javax.management.remote.JMXConnector; import javax.management.remote.JMXServiceURL; import javax.management.remote.JMXConnectorFactory; import java.util.Hashtable; import java.util.Set; import java.util.Locale; public class JMXClient { public static void main(String[] args) throws Exception { JMXConnector jmxCon = null; try { JMXServiceURL serviceUrl = new JMXServiceURL( "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime"); System.out.println("Connecting to: " + serviceUrl); // properties associated with the connection Hashtable env = new Hashtable(); env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); String[] credentials = new String[2]; credentials[0] = "weblogic"; credentials[1] = "weblogic"; env.put(JMXConnector.CREDENTIALS, credentials); // specifies the client's Locale env.put("weblogic.management.remote.locale", Locale.FRENCH); jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env); jmxCon.connect(); MBeanServerConnection con = jmxCon.getMBeanServerConnection(); Set mbeans = con.queryNames( new ObjectName( "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"), null); for (ObjectName mbeanName : mbeans) { System.out.println("\n\nMBEAN: " + mbeanName); MBeanInfo minfo = con.getMBeanInfo(mbeanName); System.out.println("MBean Description: "+minfo.getDescription()); System.out.println("\n"); } } finally { // release the connection if (jmxCon != null) jmxCon.close(); } } } The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script. 
The resulting output is shown below: $ ./client.sh Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties MBean Description: Manage proprietes sauvegarde dans un fichier disque. $ Miscellaneous Using the Description annotation to specify MBean descriptions Earlier we saw how to name our MBean description resource keys so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to explicitly specify the resource key and resource bundle, for instance when operations are overloaded and the operation name is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic as follows: import weblogic.management.utils.Description; @Description(resourceKey="myapp.resources.TestMXBean.description", resourceBundleBaseName="myapp.resources.MBeanResources") public interface TestMXBean { @Description(resourceKey="myapp.resources.TestMXBean.threshold.description", resourceBundleBaseName="myapp.resources.MBeanResources" ) public int getthreshold(); @Description(resourceKey="myapp.resources.TestMXBean.reset.description", resourceBundleBaseName="myapp.resources.MBeanResources") public int reset( @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description", resourceBundleBaseName="myapp.resources.MBeanResources", displayNameKey= "myapp.resources.TestMXBean.reset.id.displayName.description") int id); } The Description annotation should be applied to the MBean interface. It can be used to specify descriptions for the MBean itself, MBean attributes, MBean operations, and MBean operation parameters, as demonstrated above. Retrieving the Locale associated with a JMX operation from the MBean code There are several cases where it is necessary to retrieve the Locale associated with a JMX call from the MBean implementation. For instance, this can be useful when localizing exception messages. This can be done as follows: import weblogic.management.mbeanservers.JMXContextUtil; ...... // some MBean method implementation public String setProperty(String key, String value) throws IOException { Locale callersLocale = JMXContextUtil.getLocale(); // use callersLocale to localize Exception messages or // potentially some return values such as a Date .... } Conclusion With this last part we conclude our three-part series on how to write MBeans to manage J2EE applications. We are far from having exhausted this particular topic, but we have come a long way and are now able to take advantage of the latest functionality provided by the WebLogic application server to write user-friendly MBeans.

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed. A simple 2-table scenario: I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key that identifies the parent Order. An Order can have 1 to n Items. I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order. Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold and you may or may not agree. The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Item client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Item client objects from the 2 datatables returned. What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created in the WinForm fat-client app. The SQL built on the client would have looked something like this: TempSQL = " INSERT INTO #ItemsToQuery (OrderId, ItemId) SELECT Orders.OrderId, Items.ItemId FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND /* Some unpredictable Order filters go here */ AND /* Some unpredictable Items filters go here */ " Then, I would call a stored procedure: CREATE PROCEDURE GetItemsAndOrders(@TempSQL AS text) EXECUTE (@TempSQL) -- to create the #ItemsToQuery table SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery) SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery) The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one: 1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL; because the temp table was not created by dynamic SQL, it could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure. 2) I could perform all the queries from the client. The first would be something like this: SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter) SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter) This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables. I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one. If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
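One sketch of option 1, for what it's worth: the client runs the dynamically built key query itself, then hands the resulting keys to the stored procedure as a table-valued parameter (SQL Server 2008+), so the procedure's two static SELECTs can return Items and Orders as two separate tables. The table type dbo.ItemsToQueryType, the procedure name, and the connection string below are hypothetical.

using System;
using System.Data;
using System.Data.SqlClient;

class ItemsAndOrdersClient
{
    // Run the dynamically built key query on the client, then pass the keys
    // to a stored procedure whose static SELECTs return the two result tables.
    static DataSet GetItemsAndOrders(string dynamicKeySql, string connStr)
    {
        var keys = new DataTable();                       // columns: OrderId, ItemId
        using (var conn = new SqlConnection(connStr))
        {
            using (var da = new SqlDataAdapter(dynamicKeySql, conn))
            {
                da.Fill(keys);                            // the dynamic filter runs here, client side
            }

            using (var cmd = new SqlCommand("dbo.GetItemsAndOrders", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                var p = cmd.Parameters.AddWithValue("@ItemsToQuery", keys);
                p.SqlDbType = SqlDbType.Structured;       // table-valued parameter
                p.TypeName = "dbo.ItemsToQueryType";

                var result = new DataSet();               // Tables[0] = Items, Tables[1] = Orders
                using (var da = new SqlDataAdapter(cmd))
                {
                    da.Fill(result);
                }
                return result;
            }
        }
    }
}

The matching procedure would declare @ItemsToQuery dbo.ItemsToQueryType READONLY and join against it in its two SELECTs, which sidesteps the temp-table scope problem entirely while keeping the two-table shape the client code expects.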

    Read the article

  • HttpPost works in Java project, not in Android

    - by dave.c
    I've written some code for my Android device to login to a web site over https and parse some data out of the resulting pages. An HttpGet happens first to get some info needed for login, then an HttpPost to do the actual login process. The code below works great in a Java project within Eclipse which has the following Jar files on the build path: httpcore-4.1-beta2.jar, httpclient-4.1-alpha2.jar, httpmime-4.1-alpha2.jar, commons-logging-1.1.1.jar. public static MyBean gatherData(String username, String password) { MyBean myBean = new MyBean(); try { HttpResponse response = doHttpGet(URL_PAGE_LOGIN, null, null); System.out.println("Got login page"); String content = EntityUtils.toString(response.getEntity()); String token = ContentParser.getToken(content); String cookie = getCookie(response); System.out.println("Performing login"); System.out.println("token = "+token +" || cookie = "+cookie); response = doLoginPost(username,password,cookie, token); int respCode = response.getStatusLine().getStatusCode(); if (respCode != 302) { System.out.println("ERROR: not a 302 redirect!: code is \""+ respCode+"\""); if (respCode == 200) { System.out.println(getHeaders(response)); System.out.println(EntityUtils.toString(response.getEntity()).substring(0, 500)); } } else { System.out.println("Logged in OK, loading account home"); // redirect handler and rest of parse removed } }catch (Exception e) { System.out.println("ERROR in gatherdata: "+e.toString()); e.printStackTrace(); } return myBean; } private static HttpResponse doHttpGet(String url, String cookie, String referrer) { try { HttpClient client = new DefaultHttpClient(); client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); client.getParams().setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, "UTF-8"); HttpGet httpGet = new HttpGet(url); httpGet.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); httpGet.setHeader(HEADER_USER_AGENT,HEADER_USER_AGENT_VALUE); if (referrer != null && !referrer.equals("")) httpGet.setHeader(HEADER_REFERER,referrer); if (cookie != null && !cookie.equals("")) httpGet.setHeader(HEADER_COOKIE,cookie); return client.execute(httpGet); } catch (Exception e) { e.printStackTrace(); throw new ConnectException("Failed to read content from response"); } } private static HttpResponse doLoginPost(String username, String password, String cookie, String token) throws ClientProtocolException, IOException { try { HttpClient client = new DefaultHttpClient(); client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); client.getParams().setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, "UTF-8"); HttpPost post = new HttpPost(URL_LOGIN_SUBMIT); post.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); post.setHeader(HEADER_USER_AGENT,HEADER_USER_AGENT_VALUE); post.setHeader(HEADER_REFERER, URL_PAGE_LOGIN); post.setHeader(HEADER_COOKIE, cookie); post.setHeader("Content-Type","application/x-www-form-urlencoded"); List<NameValuePair> formParams = new ArrayList<NameValuePair>(); formParams.add(new BasicNameValuePair("org.apache.struts.taglib.html.TOKEN", token)); formParams.add(new BasicNameValuePair("showLogin", "true")); formParams.add(new BasicNameValuePair("upgrade", "")); formParams.add(new BasicNameValuePair("username", username)); formParams.add(new BasicNameValuePair("password", password)); formParams.add(new BasicNameValuePair("submit", "Secure+Log+in")); UrlEncodedFormEntity entity = new 
UrlEncodedFormEntity(formParams,HTTP.UTF_8); post.setEntity(entity); return client.execute(post); } catch (Exception e) { e.printStackTrace(); throw new ConnectException("ERROR in doLoginPost(): "+e.getMessage()); } } The server (which is not under my control) returns a 302 redirect when the login was successful, and 200 if it fails and re-loads the login page. When run with the above Jar files I get the 302 redirect, however if I run the exact same code from an Android project with the 1.6 Android Jar file on the build path I get the 200 response from the server. I get the same 200 response when running the code on my 2.2 device. My android application has internet permissions, and the HttpGet works fine. I'm assuming that the problem lies in the fact that HttpPost (or some other class) is different in some significant way between the Android Jar version and the newer Apache versions. I've tried adding the Apache libraries to the build path of the Android project, but due to the duplicate classes I get messages like: INFO/dalvikvm(390): DexOpt: not resolving ambiguous class 'Lorg/apache/http/impl/client/DefaultHttpClient;' in the log. I've also tried using a MultipartEntity instead of the UrlEncodedFormEntity but I get the same 200 result. So, I have a few questions: - Can I force the code running under android to use the newer Apache libraries in preference to the Android versions? - If not, does anyone have any ideas how can I alter my code so that it works with the Android Jar? - Are there any other, totally different approaches to doing an HttpPost in Android? - Any other ideas? I've read a lot of posts and code but I'm not getting anywhere. I've been stuck on this for a couple of days and I'm at a loss how to get the thing to work, so I'll try anything at this point. Thanks in advance.

    Read the article

  • New security configuration flag in UCM PS3

    - by kyle.hatlestad
While the recent Patch Set 3 (PS3) release was mostly focused on bug fixes and such, a new configuration flag was added for security. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was possible to enable these ACLs without having the Collaboration Manager component enabled, but it took some special instructions (see technote #603148.1) and added some extraneous pieces still related to Collaboration Manager. When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they've been cleaned up a bit and a new configuration flag has been added to simply turn on the ACL fields and none of the other collaboration bits. To enable ACLs: UseEntitySecurity=true Along with this configuration flag to turn ACLs on, you also need to define which Security Groups will honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. SpecialAuthGroups=HumanResources,Legal,Marketing Save the settings and restart the instance. Upon restart, two new metadata fields will be created: xClbraUserList and xClbraAliasList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. As for how they are stored in the metadata fields, each entry starts with its identifier: an ampersand (&) for users, an "at" sign (@) for groups, and a colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
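To make that storage format concrete, a hypothetical batch-loader record (the user and group names below are invented) might carry values such as:

xClbraUserList=&jsmith(RWDA),&mjones(RW)
xClbraAliasList=@HrAdmins(RWDA)

A role entry would use the colon prefix in the same way, e.g. :contributor(R), with multiple entries in a field separated by commas as described above.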

    Read the article

  • 11.10 suddenly became 2d

    - by Sergey Kravchenya
    After today's update ubuntu became 2d despite the fact that I logged in under "ubuntu" not "ubuntu 2d". Also some other issues appeared (e.g. my shortcuts rolled back, num pad doesn't work, caps lock key don't have any effect on case, when I'm typing in dash - it doubles every letter - 'n' -- 'nn' ) Here is update log: Setting up google-chrome-stable (17.0.963.83-r127885) ... Setting up libavutil51 (4:0.8.1-1~ppa1) ... Setting up libpostproc52 (4:0.8.1-1~ppa1) ... Setting up libswscale2 (4:0.8.1-1~ppa1) ... Setting up libavcodec53 (4:0.8.1-1~ppa1) ... Setting up libavformat53 (4:0.8.1-1~ppa1) ... Setting up libbluray1 (1:0.2.2-1~ppa1) ... Setting up libmp3lame0 (3.99.5+repack1-3~ppa1) ... Setting up libx264-120 (2:0.120.2171+git01f7a33-3~ppa1) ... Setting up libxvidcore4 (2:1.3.2-9~ppa1) ... Setting up thunderbird (11.0+build1-0ubuntu0.11.10.1) ... Installing new version of config file /etc/apport/blacklist.d/thunderbird ... Setting up thunderbird-globalmenu (11.0+build1-0ubuntu0.11.10.1) ... Setting up thunderbird-gnome-support (11.0+build1-0ubuntu0.11.10.1) ... Setting up thunderbird-locale-en (1:11.0+build1-0ubuntu0.11.10.1) ... Setting up thunderbird-locale-en-gb (1:11.0+build1-0ubuntu0.11.10.1) ... Setting up thunderbird-locale-en-us (1:11.0+build1-0ubuntu0.11.10.1) ... Setting up thunderbird-locale-ru (1:11.0+build1-0ubuntu0.11.10.1) ... Setting up ubuntu-tweak (0.6.2-1~oneiric1) ... Setting up xul-ext-calendar-timezones (1.3+build1-0ubuntu0.11.10.1) ... Setting up xul-ext-lightning (1.3+build1-0ubuntu0.11.10.1) ... Setting up xul-ext-gdata-provider (1.3+build1-0ubuntu0.11.10.1) ... What can I do to find what cause of this problem? What additional information should I provide to help you help me?

    Read the article

  • How to include multiple XML files in a single XML file for deserialization by XmlSerializer in .NET

    - by harrydev
    Hi, is it possible to use the XmlSerializer in .NET to load an XML file which includes other XML files? And how? This, in order to share XML state easily in two "parent" XML files, e.g. AB and BC in below. Example: using System; using System.IO; using System.Xml.Serialization; namespace XmlSerializerMultipleFilesTest { [Serializable] public class A { public int Value { get; set; } } [Serializable] public class B { public double Value { get; set; } } [Serializable] public class C { public string Value { get; set; } } [Serializable] public class AB { public A A { get; set; } public B B { get; set; } } [Serializable] public class BC { public B B { get; set; } public C C { get; set; } } class Program { public static void Serialize<T>(T data, string filePath) { using (var writer = new StreamWriter(filePath)) { var xmlSerializer = new XmlSerializer(typeof(T)); xmlSerializer.Serialize(writer, data); } } public static T Deserialize<T>(string filePath) { using (var reader = new StreamReader(filePath)) { var xmlSerializer = new XmlSerializer(typeof(T)); return (T)xmlSerializer.Deserialize(reader); } } static void Main(string[] args) { const string fileNameA = @"A.xml"; const string fileNameB = @"B.xml"; const string fileNameC = @"C.xml"; const string fileNameAB = @"AB.xml"; const string fileNameBC = @"BC.xml"; var a = new A(){ Value = 42 }; var b = new B(){ Value = Math.PI }; var c = new C(){ Value = "Something rotten" }; Serialize(a, fileNameA); Serialize(b, fileNameB); Serialize(c, fileNameC); // How can AB and BC be deserialized from single // files which include two of the A, B or C files. // Using ideally something like: var ab = Deserialize<AB>(fileNameAB); var bc = Deserialize<BC>(fileNameBC); // That is, so that A, B, C xml file // contents are shared across these two } } } Thus, the A, B, C files contain the following: A.xml: <?xml version="1.0" encoding="utf-8"?> <A xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Value>42</Value> </A> B.xml: <?xml version="1.0" encoding="utf-8"?> <B xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Value>3.1415926535897931</Value> </B> C.xml: <?xml version="1.0" encoding="utf-8"?> <C xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Value>Something rotten</Value> </C> And then the "parent" XML files would contain a XML include file of some sort (I have not been able to find anything like this), such as: AB.xml: <?xml version="1.0" encoding="utf-8"?> <AB xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <A include="A.xml"/> <B include="B.xml"/> </AB> BC.xml: <?xml version="1.0" encoding="utf-8"?> <BC xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <B include="B.xml"/> <C include="C.xml"/> </BC> Of course, I guess this can be solved by implementing IXmlSerializer for AB and BC, but I was hoping there was an easier solution or a generic solution with which classes themselves only need the [Serializable] attribute and nothing else. That is, the split into multiple files is XML only and handled by XmlSerializer itself or a custom generic serializer on top of this. I know this should be somewhat possible with app.config (as in http://stackoverflow.com/questions/480538/use-xml-includes-or-config-references-in-app-config-to-include-other-config-files), but I would prefer a solution based on XmlSerializer. 
Thanks.
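For what it's worth, XmlSerializer has no built-in include mechanism, so the include attribute sketched in AB.xml/BC.xml above is not something it will resolve on its own. One pragmatic workaround, sketched below against the classes from the question, is to keep A.xml, B.xml, and C.xml as the single shared source of truth and compose the parent objects in code instead of deserializing AB.xml/BC.xml directly:

using System.IO;
using System.Xml.Serialization;

public static class XmlComposer
{
    // Same generic helper as Deserialize<T> in the question.
    public static T Deserialize<T>(string filePath)
    {
        using (var reader = new StreamReader(filePath))
        {
            return (T)new XmlSerializer(typeof(T)).Deserialize(reader);
        }
    }

    // The "parents" are assembled from the shared fragment files, so the
    // A/B/C state lives in one place and is shared by both AB and BC.
    public static AB LoadAB(string fileA, string fileB)
    {
        return new AB { A = Deserialize<A>(fileA), B = Deserialize<B>(fileB) };
    }

    public static BC LoadBC(string fileB, string fileC)
    {
        return new BC { B = Deserialize<B>(fileB), C = Deserialize<C>(fileC) };
    }
}

Usage would then be var ab = XmlComposer.LoadAB(fileNameA, fileNameB); and similarly for BC. A fuller solution could first read the include attributes out of AB.xml with XDocument and dispatch on them, but the composition step stays the same and the classes keep only their [Serializable]/default XmlSerializer behavior.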

    Read the article

  • Accessing PerSession service simultaneously in WCF using C#

    - by krishna555
    1.) I have a main method Processing, which takes string as an arguments and that string contains some x number of tasks. 2.) I have another method Status, which keeps track of first method by using two variables TotalTests and CurrentTest. which will be modified every time with in a loop in first method(Processing). 3.) When more than one client makes a call parallely to my web service to call the Processing method by passing a string, which has different tasks will take more time to process. so in the mean while clients will be using a second thread to call the Status method in the webservice to get the status of the first method. 4.) when point number 3 is being done all the clients are supposed to get the variables(TotalTests,CurrentTest) parallely with out being mixed up with other client requests. 5.) The code that i have provided below is getting mixed up variables results for all the clients when i make them as static. If i remove static for the variables then clients are just getting all 0's for these 2 variables and i am unable to fix it. Please take a look at the below code. [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)] public class Service1 : IService1 { public int TotalTests = 0; public int CurrentTest = 0; public string Processing(string OriginalXmlString) { XmlDocument XmlDoc = new XmlDocument(); XmlDoc.LoadXml(OriginalXmlString); this.TotalTests = XmlDoc.GetElementsByTagName("TestScenario").Count; //finding the count of total test scenarios in the given xml string this.CurrentTest = 0; while(i<10) { ++this.CurrentTest; i++; } } public string Status() { return (this.TotalTests + ";" + this.CurrentTest); } } server configuration <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="true" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> client configuration <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="true" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> 
</wsHttpBinding> Below mentioned is my client code class Program { static void Main(string[] args) { Program prog = new Program(); Thread JavaClientCallThread = new Thread(new ThreadStart(prog.ClientCallThreadRun)); Thread JavaStatusCallThread = new Thread(new ThreadStart(prog.StatusCallThreadRun)); JavaClientCallThread.Start(); JavaStatusCallThread.Start(); } public void ClientCallThreadRun() { XmlDocument doc = new XmlDocument(); doc.Load(@"D:\t72CalculateReasonableWithdrawal_Input.xml"); bool error = false; Service1Client Client = new Service1Client(); string temp = Client.Processing(doc.OuterXml, ref error); } public void StatusCallThreadRun() { int i = 0; Service1Client Client = new Service1Client(); string temp; while (i < 10) { temp = Client.Status(); Thread.Sleep(1500); Console.WriteLine("TotalTestScenarios;CurrentTestCase = {0}", temp); i++; } } } Can any one please help.
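One detail worth noting about the code above: with InstanceContextMode.PerSession, each proxy (Service1Client) gets its own session and therefore its own service instance, so the Status proxy created in StatusCallThreadRun never sees the counters that the Processing proxy's instance is updating. Below is a rough, hypothetical sketch of one way to restructure the service so that a single proxy shared by both threads can poll status while Processing runs; it assumes ConcurrencyMode.Multiple is acceptable, that the binding allows concurrent calls on the session, and that simple visibility (volatile) of the two counters is enough:

using System.ServiceModel;
using System.Xml;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IService1
{
    [OperationContract] string Processing(string originalXmlString);
    [OperationContract] string Status();
}

// PerSession keeps one instance per client session; ConcurrencyMode.Multiple
// lets Status() run while Processing() is still executing in that session.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Service1 : IService1
{
    private volatile int totalTests;    // per-session, written by Processing
    private volatile int currentTest;   // per-session, read by Status

    public string Processing(string originalXmlString)
    {
        var doc = new XmlDocument();
        doc.LoadXml(originalXmlString);
        totalTests = doc.GetElementsByTagName("TestScenario").Count;

        for (int i = 0; i < totalTests; i++)
        {
            currentTest = i + 1;        // ...run test scenario i here...
        }
        return "done";
    }

    public string Status()
    {
        return totalTests + ";" + currentTest;
    }
}

On the client side, the corresponding change would be to create one Service1Client and hand that same instance to both threads, so the Status polling rides on the same session (and therefore the same per-session counters) as the Processing call.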

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
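To make the "drop-in database engine" point concrete, here is a minimal, hypothetical sketch (invented file and table names) of opening a file-based SQL Server Compact 4.0 database through its ADO.NET provider; the same few lines work in a web application (with the .sdf under App_Data) or a desktop application:

using System;
using System.Data.SqlServerCe;   // provider assembly shipped with SQL Server Compact 4.0

class SqlCeDemo
{
    static void Main()
    {
        // |DataDirectory| resolves to App_Data in ASP.NET, or the app folder otherwise.
        const string connStr = @"Data Source=|DataDirectory|\Demo.sdf";

        using (var engine = new SqlCeEngine(connStr))
        {
            engine.CreateDatabase();   // first run only; throws if the .sdf file already exists
        }

        using (var conn = new SqlCeConnection(connStr))
        {
            conn.Open();
            new SqlCeCommand("CREATE TABLE Customers (Id int, Name nvarchar(100))", conn).ExecuteNonQuery();
            new SqlCeCommand("INSERT INTO Customers (Id, Name) VALUES (1, 'Rick')", conn).ExecuteNonQuery();

            using (var reader = new SqlCeCommand("SELECT Id, Name FROM Customers", conn).ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}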
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature. ASP.NET Razor View Engine What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations: <!DOCTYPE html> <html>     <head>         <title></title>     </head>     <body>     <h1>Razor Test</h1>          <!-- simple expressions -->     @DateTime.Now     <hr />     <!-- method expressions -->     @DateTime.Now.ToString("T")          <!-- code blocks -->     @{         List<string> names = new List<string>();         names.Add("Rick");         names.Add("Markus");         names.Add("Claudio");         names.Add("Kevin");     }          <!-- structured block statements -->     <ul>     @foreach(string name in names){             <li>@name</li>     }     </ul>           <!-- Conditional code -->        @if(true) {                        <!-- Literal Text embedding in code -->        <text>         true        </text>;    }    else    {        <!-- Literal Text embedding in code -->       <text>       false       </text>;    }    </body> </html> Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: -Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. 
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
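As a point of comparison with the Database helper shown a little earlier, here is roughly the plain ADO.NET equivalent of that one-line db.Query() call. The connection string and table are hypothetical, and against a local .sdf file you would swap in the SqlCe classes for SqlClient:

using System;
using System.Data.SqlClient;   // full SQL Server; use System.Data.SqlServerCe for a local .sdf

class CustomerList
{
    static void Main()
    {
        // Hypothetical connection string mirroring the Razor snippet above.
        const string connStr = "Data Source=.;Initial Catalog=FirstApp;Integrated Security=true";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("select * from customers where Id > @Id", conn))
        {
            cmd.Parameters.AddWithValue("@Id", 1);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Database.Query() exposes these as dynamic row properties (row.FirstName);
                    // plain ADO.NET hands back untyped columns accessed by name or ordinal.
                    Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                }
            }
        }
    }
}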
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft that simpler and more high level solutions are actually needed to appeal to the non-high end developers as well as providing the necessary tools for the high end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing especially debugging and Intellisense, but also general editor support. It’s not clear whether these are because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support and WebMatrix just has that left out feeling to it. If anything WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine for its light weight implementation and it’s decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented WebForms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought about the ease of use of .NET as a platform For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come. 
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET   IIS7  

    Read the article

< Previous Page | 290 291 292 293 294 295 296 297 298 299 300 301  | Next Page >