Search Results

Search found 6357 results on 255 pages for 'generic relations'.


  • .NET vs Windows 8

    - by Simon Cooper
    So, day 1 of DevWeek. Lots and lots of Windows 8 and WinRT, as you would expect. The keynote had some actual content in it, fleshed out some of the details of how your apps linked into the Metro infrastructure, and confirmed that there would indeed be an enterprise version of the app store available for Metro apps. However, that's not what I want to focus this post on. What I do want to focus on is this: Windows 8 does not make .NET developers obsolete. Phew!
    .NET in the New Ecosystem
    In all the hype around Windows 8 over the past few months, a lot of developers have got the impression that .NET has been sidelined in Windows 8; C++ and COM are back in vogue, and HTML5 + JavaScript is the New Way of writing applications. You know .NET? It's yesterday's tech. Enter the 21st Century and write <div>! However, after speaking to people at the conference, and after a couple of talks by Dave Wheeler on the innards of WinRT and how .NET interacts with it, my views on the coming operating system have changed somewhat. To summarize what I've picked up, in no particular order (none of this is official, just my sense of what's been said by various people):
    - Metro apps do not replace desktop apps. That is, Windows 8 fully supports .NET desktop applications written for every previous version of Windows, and will continue to do so for the foreseeable future. There are some apps that simply do not fit into Metro. They do not fit into the touch-based paradigm, and never will. Traditional desktop support is not going away anytime soon.
    - The reason Silverlight has been hidden in all the Metro hype is that Metro is essentially based on Silverlight design principles. Silverlight developers will have a much easier time writing Metro apps than desktop developers, as they would already be used to the principles of sandboxing and separation introduced with Silverlight. It's desktop developers who are going to have to adapt how they work.
    - .NET + XAML is equal in importance to HTML5 + JS. Although the underlying WinRT system is built on C++ and COM, most application development will be done using either .NET or HTML5. Both systems have their own wrapper around the underlying WinRT infrastructure, hiding the implementation details.
    - The CLR is unchanged; it's still the .NET 4 CLR, running IL in .NET assemblies. The thing that changes between desktop and Metro is the class libraries, which have more in common with the Silverlight libraries than the desktop libraries. In Metro, although all the types look and behave the same to callers, some of the core BCL types are now wrappers around their WinRT equivalents. These wrappers are then enhanced using standard .NET types and code to produce the Metro .NET class libraries.
    - You can't simply port a desktop app into Metro. The underlying file IO, network, timing and database access is either completely different or simply missing. Similarly, although the UI is programmed using XAML, the behaviour of Metro XAML is different to WPF or Silverlight XAML. Furthermore, the new design principles and touch-based interface for Metro applications demand a completely new UI. You will be able to re-use sections of your app encapsulating pure program logic, but everything else will need to be written from scratch.
    - Microsoft has taken the opportunity to remove a whole raft of types and methods from the Metro framework that are obsolete (non-generic collections) or break the sandbox (synchronous APIs); if you use these, you will have to rewrite against the alternatives, if they exist at all, to move your apps to Metro.
    - If you want to write public WinRT components in .NET, there are some quite strict rules you have to adhere to. But the compilers know about these rules; you can write them in C# or VB, and the compilers will tell you when you do something that isn't allowed, and deal with the translation to WinRT metadata rather than .NET assemblies.
    - It is possible to write a class library that can be used in Metro and desktop applications. However, you need to be very careful not to use types that are available in one but not the other. One can imagine developers writing their own abstraction around file IO and UIs (MVVM, anyone?) that can be implemented differently in Metro and desktop, but look the same within your shared library; a minimal sketch of that idea follows below.
    So, if you're a .NET developer, you have a lot less to worry about. .NET is a viable platform on Metro, and traditional desktop apps are not going away. You don't have to learn HTML5 and JavaScript if you don't want to. Hurray!
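    To make that last point concrete, here is a minimal sketch of such a shared abstraction. The interface and class names are my own invention for illustration, not from any official guidance; the WinRT calls named in the final comment are the Windows.Storage equivalents:

      // Shared library code depends only on this interface, so it can compile
      // against both the desktop and Metro profiles.
      public interface ITextFileStore
      {
          System.Threading.Tasks.Task<string> ReadAllTextAsync(string fileName);
      }

      // Desktop implementation: free to use the full System.IO surface.
      public class DesktopTextFileStore : ITextFileStore
      {
          public System.Threading.Tasks.Task<string> ReadAllTextAsync(string fileName)
          {
              return System.Threading.Tasks.Task.Factory.StartNew(
                  () => System.IO.File.ReadAllText(fileName));
          }
      }

      // A Metro implementation would instead await the asynchronous WinRT storage API,
      // e.g. StorageFile.GetFileFromPathAsync(...) followed by FileIO.ReadTextAsync(file).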

    Read the article

  • Searching for the last logon of users in Active Directory

    - by Robert May
    I needed to clean out a bunch of old accounts at Veracity Solutions, and wanted to delete those that hadn’t been used in more than a year. I found that AD has a property on objects called lastLogonTimestamp.  However, this value isn’t exposed to you in any useful fashion.  Sure, you can pull up ADSI Edit and eventually get to it there, but it’s painful. I spent some time searching and discovered that there’s not much out there to help, so I thought a blog post showing exactly how to get at this information would be in order.
    Basically, what you end up doing is using System.DirectoryServices to search for accounts, then filtering those for users and doing some conversion and such to make it happen.  The end result is that you get a list of users with their logon information, and you can then do with that what you will.  I turned my list into an observable collection and bound it into a XAML form.
    One important note: you need to add a reference to the ActiveDs Type Library, found in the COM section of the Add Reference dialog, to get to LargeInteger.
    Here’s the class:

      namespace Veracity.Utilities
      {
          using System;
          using System.Collections.Generic;
          using System.DirectoryServices;
          using ActiveDs;
          using log4net;

          /// <summary>
          /// Finds users inside of the active directory system.
          /// </summary>
          public class UserFinder
          {
              /// <summary>
              /// Creates the default logger
              /// </summary>
              private static readonly ILog log = LogManager.GetLogger(typeof(UserFinder));

              /// <summary>
              /// Finds last logon information
              /// </summary>
              /// <param name="domain">The domain to search.</param>
              /// <param name="userName">The username for the query.</param>
              /// <param name="password">The password for the query.</param>
              /// <returns>A list of users with their last logon information.</returns>
              public IList<UserLoginInformation> GetLastLogonInformation(string domain, string userName, string password)
              {
                  IList<UserLoginInformation> result = new List<UserLoginInformation>();
                  DirectoryEntry entry = new DirectoryEntry(domain, userName, password, AuthenticationTypes.Secure);
                  DirectorySearcher directorySearcher = new DirectorySearcher(entry);
                  directorySearcher.PropertyNamesOnly = true;
                  directorySearcher.PropertiesToLoad.Add("name");
                  directorySearcher.PropertiesToLoad.Add("lastLogonTimeStamp");
                  SearchResultCollection searchResults;
                  try
                  {
                      searchResults = directorySearcher.FindAll();
                  }
                  catch (System.Exception ex)
                  {
                      log.Error("Failed to do a find all.", ex);
                      throw;
                  }
                  try
                  {
                      foreach (SearchResult searchResult in searchResults)
                      {
                          DirectoryEntry resultEntry = searchResult.GetDirectoryEntry();
                          if (resultEntry.SchemaClassName == "user")
                          {
                              UserLoginInformation logon = new UserLoginInformation();
                              logon.Name = resultEntry.Name;
                              PropertyValueCollection timeStampObject = resultEntry.Properties["lastLogonTimeStamp"];
                              if (timeStampObject.Count > 0)
                              {
                                  IADsLargeInteger logonTimeStamp = (IADsLargeInteger)timeStampObject[0];
                                  long lastLogon = (long)((uint)logonTimeStamp.LowPart + (((long)logonTimeStamp.HighPart) << 32));
                                  logon.LastLogonTime = DateTime.FromFileTime(lastLogon);
                              }
                              result.Add(logon);
                          }
                      }
                  }
                  catch (System.Exception ex)
                  {
                      log.Error("Failed to iterate search results.", ex);
                      throw;
                  }
                  return result;
              }
          }
      }

    Some important things to note:
    - Username and password can be set to null, and if your computer is part of the domain, this may still work.
    - Domain should be set to something like LDAP://servername/CN=Users,DC=domain,DC=com
    - You’re actually getting a COM object back, which is why the LargeInteger conversions are happening.
    The class for UserLoginInformation looks like this:

      namespace Veracity.Utilities
      {
          using System;

          /// <summary>
          /// Represents user login information.
          /// </summary>
          public class UserLoginInformation
          {
              /// <summary>
              /// Gets or sets Name
              /// </summary>
              public string Name { get; set; }

              /// <summary>
              /// Gets or sets LastLogonTime
              /// </summary>
              public DateTime LastLogonTime { get; set; }

              /// <summary>
              /// Gets the age of the account.
              /// </summary>
              public TimeSpan AccountAge
              {
                  get
                  {
                      TimeSpan result = TimeSpan.Zero;
                      if (this.LastLogonTime != DateTime.MinValue)
                      {
                          result = DateTime.Now.Subtract(this.LastLogonTime);
                      }
                      return result;
                  }
              }
          }
      }

    I hope this is useful and instructive. Technorati Tags: Active Directory
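    As a footnote, here is a variation that may let you skip the ActiveDs COM reference entirely. It is a sketch based on the way DirectorySearcher marshals large-integer attributes in search results (they typically come back as a plain Int64), so verify it against your own directory before relying on it:

      // Leave PropertyNamesOnly at its default of false so values come back with
      // the results, and filter to user objects in the LDAP query itself.
      directorySearcher.PropertyNamesOnly = false;
      directorySearcher.Filter = "(&(objectCategory=person)(objectClass=user))";
      foreach (SearchResult searchResult in directorySearcher.FindAll())
      {
          if (searchResult.Properties.Contains("lastLogonTimeStamp"))
          {
              // Marshalled as Int64, so no IADsLargeInteger cast is needed.
              long lastLogon = (long)searchResult.Properties["lastLogonTimeStamp"][0];
              DateTime lastLogonTime = DateTime.FromFileTime(lastLogon);
          }
      }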

    Read the article

  • JavaScript Intellisense with Telerik in ASP.NET Master Page Project with VS 2010

    - by Otto Neff
    Today I was looking for a solution to finally get the JScript/JavaScript/jQuery IntelliSense feature working with my ASP.NET WebForms project. I found some good articles:
    - JScript IntelliSense Overview
    - JScript IntelliSense: A Reference for the "Reference" Tag
    - Enabling JavaScript intellisense in VS.NET 2010 to work with SharePoint 2010
    - Rich IntelliSense for jQuery
    BUT, all of the suggested solutions did not work right with my master-page-based Visual Studio 2010 solution. They only worked with physical JavaScript files (Telerik includes certain JavaScript files, like jQuery, as resources), and/or required configuring a new ASP.NET ScriptManager / RadScriptManager on every page derived from the master page, which wasn't exactly what I was looking for. So I came up with the following simple solution to trick VS2010 and still have the project running with multiple runat="server" script managers.
    In short: a new ASP.NET control derived from ScriptManager with an empty overridden OnInit(), used as an empty wrapper for VS2010.
    In detail:
    New RadScriptManager class

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using Telerik.Web.UI;

      namespace IntellisenseJavascript.Controls
      {
          public class IntelliJS : RadScriptManager
          {
              protected override void OnInit(EventArgs e) { }
              protected override void OnPreRender(EventArgs e) { }
              protected override void OnLoad(EventArgs e) { }
              protected override void Render(System.Web.UI.HtmlTextWriter writer) { }
              public override void RenderControl(System.Web.UI.HtmlTextWriter writer) { }
          }
      }

    web.config

      <configuration>
        ...
        <system.web>
          ...
          <pages>
            <controls>
              <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
              <add tagPrefix="VSFix" namespace="IntellisenseJavascript.Controls" assembly="IntellisenseJavascript"/>
            </controls>
          </pages>
        ...
    Master Page

      <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="IntellisenseJavascript.Site" %>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html>
      <head id="head" runat="server">
          <title></title>
          <telerik:RadStyleSheetManager ID="radStyleSheetManager" runat="server" />
      </head>
      <body>
          <form id="form" runat="server">
          <telerik:RadScriptManager ID="radScriptManager" runat="server">
              <Scripts>
                  <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.Core.js" />
                  <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQuery.js" />
                  <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQueryInclude.js" />
              </Scripts>
          </telerik:RadScriptManager>
          <telerik:RadAjaxManager ID="radAjaxManager" runat="server">
          </telerik:RadAjaxManager>
          <div>
              #MASTER CONTENT#
              <asp:ContentPlaceHolder ID="contentPlaceHolder" runat="server">
              </asp:ContentPlaceHolder>
          </div>
          </form>
          <script type="text/javascript">
              $(function () {
                  // Masterpage ready
                  $('body').css('margin', '50px');
              });
          </script>
      </body>
      </html>

    ASPX Page

      <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="IntellisenseJavascript.Default" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="contentPlaceHolder" runat="server">
          <VSFix:IntelliJS runat="server" ID="intelliJS">
              <Scripts>
                  <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.Core.js" />
                  <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQuery.js" />
                  <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQueryInclude.js" />
              </Scripts>
          </VSFix:IntelliJS>
          <div style="border: 5px solid #FF9900;">
              #PAGE CONTENT#
          </div>
          <script type="text/javascript">
              $(function () {
                  // Page ready
                  $('body').css('border', '5px solid #888');
              });
          </script>
      </asp:Content>

    The result: I know this is not the way it's meant to be… but now at least you can have a main ScriptManager for all common scripts and settings, inject page-specific JavaScript in the PageLoad event in normal ASPX files, and have JavaScript IntelliSense for defined scripts from JS files or assembly resources in your content. Maybe vNext will fix this.

    Read the article

  • Error when trying to compile abgx360: C++ compiler cannot create executables

    - by era878
    I am trying to compile the abgx360 GUI. First I run /home/eric/Desktop/abgx360-1.0.5/configure, but I receive this error:
    checking for C++ compiler default output file name... configure: error: C++ compiler cannot create executables See `config.log' for more details.
    Then I run make, but I receive this error:
    make: *** No rule to make target `/home/eric/Desktop/abgx360-1.0.5/Makefile.am', needed by `/home/eric/Desktop/abgx360-1.0.5/Makefile.in'. Stop.
    Here is my config.log:
    This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by abgx360gui configure 1.0.2, which was generated by GNU Autoconf 2.61. Invocation command line was $ /home/eric/Desktop/abgx360gui-1.0.2/configure ## --------- ## ## Platform. ## ## --------- ## hostname = Eric-Desktop uname -m = x86_64 uname -r = 2.6.35-27-generic uname -s = Linux uname -v = #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 /usr/bin/uname -p = unknown /bin/uname -X = unknown /bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = unknown /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: /usr/local/sbin PATH: /usr/local/bin PATH: /usr/sbin PATH: /usr/bin PATH: /sbin PATH: /bin PATH: /usr/games ## ----------- ## ## Core tests. ## ## ----------- ## configure:1800: checking for a BSD-compatible install configure:1856: result: /usr/bin/install -c configure:1867: checking whether build environment is sane configure:1910: result: yes configure:1938: checking for a thread-safe mkdir -p configure:1977: result: /bin/mkdir -p configure:1990: checking for gawk configure:2020: result: no configure:1990: checking for mawk configure:2006: found /usr/bin/mawk configure:2017: result: mawk configure:2028: checking whether make sets $(MAKE) configure:2049: result: yes configure:2302: checking for g++ configure:2332: result: no configure:2302: checking for c++ configure:2332: result: no configure:2302: checking for gpp configure:2332: result: no configure:2302: checking for aCC configure:2332: result: no configure:2302: checking for CC configure:2332: result: no configure:2302: checking for cxx configure:2332: result: no configure:2302: checking for cc++ configure:2332: result: no configure:2302: checking for cl.exe configure:2332: result: no configure:2302: checking for FCC configure:2332: result: no configure:2302: checking for KCC configure:2332: result: no configure:2302: checking for RCC configure:2332: result: no configure:2302: checking for xlC_r configure:2332: result: no configure:2302: checking for xlC configure:2332: result: no configure:2360: checking for C++ compiler version configure:2367: g++ --version >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2368: g++: command not found configure:2370: $? = 127 configure:2377: g++ -v >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2378: g++: command not found configure:2380: $? = 127 configure:2387: g++ -V >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2388: g++: command not found configure:2390: $? = 127 configure:2413: checking for C++ compiler default output file name configure:2440: g++ conftest.cpp >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2441: g++: command not found configure:2443: $? = 127 configure:2481: result: configure: failed program was: | /* confdefs.h. 
*/ | #define PACKAGE_NAME "abgx360gui" | #define PACKAGE_TARNAME "abgx360gui" | #define PACKAGE_VERSION "1.0.2" | #define PACKAGE_STRING "abgx360gui 1.0.2" | #define PACKAGE_BUGREPORT "" | #define PACKAGE "abgx360gui" | #define VERSION "1.0.2" | /* end confdefs.h. */ | | int | main () | { | | ; | return 0; | } configure:2488: error: C++ compiler cannot create executables See `config.log' for more details. ## ---------------- ## ## Cache variables. ## ## ---------------- ## ac_cv_env_CCC_set= ac_cv_env_CCC_value= ac_cv_env_CC_set= ac_cv_env_CC_value= ac_cv_env_CFLAGS_set= ac_cv_env_CFLAGS_value= ac_cv_env_CPPFLAGS_set= ac_cv_env_CPPFLAGS_value= ac_cv_env_CPP_set= ac_cv_env_CPP_value= ac_cv_env_CXXFLAGS_set= ac_cv_env_CXXFLAGS_value= ac_cv_env_CXX_set= ac_cv_env_CXX_value= ac_cv_env_LDFLAGS_set= ac_cv_env_LDFLAGS_value= ac_cv_env_LIBS_set= ac_cv_env_LIBS_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set= ac_cv_env_host_alias_value= ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_path_install='/usr/bin/install -c' ac_cv_path_mkdir=/bin/mkdir ac_cv_prog_AWK=mawk ac_cv_prog_make_make_set=yes ## ----------------- ## ## Output variables. ## ## ----------------- ## ACLOCAL='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run aclocal-1.10' AMDEPBACKSLASH='' AMDEP_FALSE='' AMDEP_TRUE='' AMTAR='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run tar' AUTOCONF='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run autoconf' AUTOHEADER='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run autoheader' AUTOMAKE='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run automake-1.10' AWK='mawk' CC='' CCDEPMODE='' CFLAGS='' CPP='' CPPFLAGS='' CXX='g++' CXXDEPMODE='' CXXFLAGS='' CYGPATH_W='echo' DEFS='' DEPDIR='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='' EXEEXT='' GREP='' INSTALL_DATA='${INSTALL} -m 644' INSTALL_PROGRAM='${INSTALL}' INSTALL_SCRIPT='${INSTALL}' INSTALL_STRIP_PROGRAM='$(install_sh) -c -s' LDFLAGS='' LIBOBJS='' LIBS='' LTLIBOBJS='' MAKEINFO='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run makeinfo' OBJEXT='' PACKAGE='abgx360gui' PACKAGE_BUGREPORT='' PACKAGE_NAME='abgx360gui' PACKAGE_STRING='abgx360gui 1.0.2' PACKAGE_TARNAME='abgx360gui' PACKAGE_VERSION='1.0.2' PATH_SEPARATOR=':' SET_MAKE='' SHELL='/bin/bash' STRIP='' VERSION='1.0.2' WX_CFLAGS='' WX_CFLAGS_ONLY='' WX_CONFIG_PATH='' WX_CPPFLAGS='' WX_CXXFLAGS='' WX_CXXFLAGS_ONLY='' WX_LIBS='' WX_LIBS_STATIC='' WX_RESCOMP='' WX_VERSION='' ac_ct_CC='' ac_ct_CXX='' am__fastdepCC_FALSE='' am__fastdepCC_TRUE='' am__fastdepCXX_FALSE='' am__fastdepCXX_TRUE='' am__include='' am__isrc=' -I$(srcdir)' am__leading_dot='.' am__quote='' am__tar='${AMTAR} chof - "$$tardir"' am__untar='${AMTAR} xf -' bindir='${exec_prefix}/bin' build_alias='' datadir='${datarootdir}' datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' dvidir='${docdir}' exec_prefix='NONE' host_alias='' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' install_sh='$(SHELL) /home/eric/Desktop/abgx360gui-1.0.2/install-sh' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/libexec' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' mandir='${datarootdir}/man' mkdir_p='/bin/mkdir -p' oldincludedir='/usr/include' pdfdir='${docdir}' prefix='NONE' program_transform_name='s,x,x,' psdir='${docdir}' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' sysconfdir='${prefix}/etc' target_alias='' ## ----------- ## ## confdefs.h. 
## ## ----------- ## #define PACKAGE_NAME "abgx360gui" #define PACKAGE_TARNAME "abgx360gui" #define PACKAGE_VERSION "1.0.2" #define PACKAGE_STRING "abgx360gui 1.0.2" #define PACKAGE_BUGREPORT "" #define PACKAGE "abgx360gui" #define VERSION "1.0.2" configure: exit 77

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
    soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9
    OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ
    MagicChatzi Dutch ACE’s and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr…
    leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam
    lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r
    biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr
    OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev
    cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac…
    Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA
    eelzinga Thanks for organising it Andreas! #soacommunity
    eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity
    EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX
    OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY
    OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz
    OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib
    OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3
    OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN
    oracletechnet Wikis.oracle.com lives
    leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle
    OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup
    OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr
    Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-…
    Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U
    soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity
    soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU
    OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX
    OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h
    OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4
    biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem
    soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R
    soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3
    soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3
    OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB
    OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11
    orclateamsoa A-Team SOA Blog: Case Management in BPM 11g - Mark Foster Oracle BPM 11g & Case Management I’ve seen… t.co/l5zb6pFr
    t_winterberg The next SIG #SOA is coming up: 7 December in Hamburg.
    New tooling and experiences around Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v
    OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI
    OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv
    lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key.
    lucasjellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api…
    OTNArchBeat SOA, cloud: it’s the architecture that matters | Joe McKendrick zd.net/tNCiTF
    orclateamsoa Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz
    orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA
    OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL
    Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH
    OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K
    gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too?
    brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg
    eelzinga New blogpost: Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus
    For regular information on Oracle SOA Suite, become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums for almost 15 years now. Publishing it as an open source app has been a big help, because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.
    One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that.
    The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long-running page to mail users in real time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.
    So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.)
    It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

      public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
      {
          var helper = new UrlHelper(controller.Request.RequestContext);
          var requestUrl = controller.Request.Url;
          if (requestUrl == null)
              return String.Empty;
          var url = requestUrl.Scheme + "://";
          url += requestUrl.Host;
          url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
          url += helper.Action(actionName, controllerName, routeValues);
          return url;
      }

    And yes, that should have been done with a string builder.
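    A string-builder version of that helper would be a minimal change; here is a quick sketch of what I mean (my rewrite, not the code the forum ships, and it assumes using System.Text):

      public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
      {
          var helper = new UrlHelper(controller.Request.RequestContext);
          var requestUrl = controller.Request.Url;
          if (requestUrl == null)
              return String.Empty;
          // Accumulate the absolute URL in one buffer instead of several string concatenations.
          var url = new StringBuilder();
          url.Append(requestUrl.Scheme).Append("://").Append(requestUrl.Host);
          if (requestUrl.Port != 80)
              url.Append(':').Append(requestUrl.Port);
          url.Append(helper.Action(actionName, controllerName, routeValues));
          return url.ToString();
      }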
    This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

      Func<User, string> unsubscribeLinkGenerator =
          user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
      _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    Cool, right? Actually, not so much. If you look back at the helper, this delegate then depends on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL encoded:

      var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
      baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
      Func<User, string> unsubscribeLinkGenerator =
          user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
      _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.

    Read the article

  • My History with Agile

    - by Robert May
    I’m going to write my history with Agile here.  That way, in future posts, I can refer back to it instead of typing it out in the post that contains information you may actually want to read.  Note that I’m actually a pretty senior developer, and I do lots of technical interviews.  I’m an Agile fan because of the difference it makes in people’s lives and the improvement in quality it brings, and I’ll sacrifice my own technological advancement to help teams.
    Management History
    I started management pretty early in my career, starting with the first job that I ever had.  I actually do NOT have a CS or similar degree.  I have a Bachelor of Business Administration with an emphasis in Computer Information Systems.
    My first management gigs were around call center work and were very schedule oriented.  I didn’t understand the true value of teams, and I’m ashamed to admit that I actually installed a fingerprint scanner as a time clock in this job.  I shudder to think of the impact that I had on team spirit.  I didn’t even trust them enough to fill out their time cards correctly.  How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn’t understand the concept of a team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn’t a good thing.
    I was told by people whom I respected a lot that I wasn’t the type who would make a good manager.  They said it because I was a computer geek (since they didn’t understand good management either), but in retrospect, they were right about me then.  I was too green.
    After my first job, I went on to other jobs, and with the exception of one job, I’ve managed people at them all.  The rest of the management story is important for understanding Agile, so I’ll save it for my next post.
    Technical History
    I’ve been in software development for many, many years.  I technically started programming on a Commodore 64 in BASIC.  I didn’t know that I was programming, but I was sure having fun.  That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics.
    My first “real” job was with a telephone company, and that’s where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal and VB3–VB5, and semi-real tools like RPG and VisualRPG.  I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB.  You really can do anything with Perl.  At this time, I also started a Linux-based internet service provider that is still in operation today.  One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well.  Smart guy.  I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and did many other system administration tasks.  I’m a programmer by choice, mostly because I don’t really like PC support.
    From there, I went on to a large state agency.  I got to see and maintain true waterfall projects.  Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time.  That was all Microsoft DNA technologies.  SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of that employment.  
    I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that served as a shim until MSXML 3.0 came out.  Prior to 3.0, XSDs weren’t supported, and I didn’t want to write DTDs. Ironically, the jobs after this were more generic.  I pretty much settled in on the .NET Framework and its revisions.  Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don’t think I was as passionate about development and technologies.  I was more into the management of development.  I like people. Technorati Tags: Agile,history

    Read the article

  • Wireless cuts out on Toshiba Satellite S7208

    - by alecRN
    I recently got a Toshiba Satellite L875-S7208 with Windows 7 preinstalled. I installed Ubuntu 12.04 LTS dual boot to the same Windows partition. However, usually 15 minutes or less after booting, the wifi connection dies. Here's some hopefully relevant information: lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09) Subsystem: Toshiba America Info Systems Device [1179:fb40] Kernel driver in use: i915 Kernel modules: i915 00:14.0 USB controller [0c03]: Intel Corporation Panther Point USB xHCI Host Controller [8086:1e31] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: xhci_hcd 00:16.0 Communication controller [0780]: Intel Corporation Panther Point MEI Controller #1 [8086:1e3a] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #2 [8086:1e2d] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: ehci_hcd 00:1b.0 Audio device [0403]: Intel Corporation Panther Point High Definition Audio Controller [8086:1e20] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb40] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 1 [8086:1e10] (rev c4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 2 [8086:1e12] (rev c4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 3 [8086:1e14] (rev c4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #1 [8086:1e26] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge [0601]: Intel Corporation Panther Point LPC Controller [8086:1e59] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel modules: iTCO_wdt 00:1f.2 SATA controller [0106]: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] [8086:1e03] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: ahci 00:1f.3 SMBus [0c05]: Intel Corporation Panther Point SMBus Controller [8086:1e22] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel modules: i2c-i801 02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01) Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:8211] Kernel driver in use: rtl8192ce Kernel modules: rtl8192ce 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. 
RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05) Subsystem: Toshiba America Info Systems Device [1179:fb37] Kernel driver in use: r8169 Kernel modules: r8169 lsmod Module Size Used by snd_hda_codec_hdmi 32474 1 snd_hda_codec_realtek 224066 1 joydev 17693 0 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 10 rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 arc4 12529 2 snd_hda_intel 33773 3 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq psmouse 87692 0 serio_raw 13211 0 rtl8192ce 84826 0 rtl8192c_common 75767 1 rtl8192ce rtlwifi 111202 1 rtl8192ce mac80211 506816 3 rtl8192ce,rtl8192c_common,rtlwifi snd 78855 16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device sparse_keymap 13890 0 uvcvideo 72627 0 videodev 98259 1 uvcvideo v4l2_compat_ioctl32 17128 1 videodev mac_hid 13253 0 mei 41616 0 wmi 19256 0 soundcore 15091 1 snd i915 472941 3 snd_page_alloc 18529 2 snd_hda_intel,snd_pcm drm_kms_helper 46978 1 i915 cfg80211 205544 2 rtlwifi,mac80211 drm 242038 4 i915,drm_kms_helper i2c_algo_bit 13423 1 i915 video 19596 1 i915 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62099 0 ums_realtek 18248 0 uas 18180 0 usb_storage 49198 1 ums_realtek dmesg | grep firmware [ 15.692951] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 16.240881] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 452.419288] rtl8192c_common:rtl92c_firmware_selfreset(): 8051 reset fail. 
[ 458.572211] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 465.440640] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 472.337617] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 479.175471] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 485.978582] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 492.764893] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 499.579348] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 506.386934] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 513.209545] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 519.991365] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 526.778375] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 533.629695] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 540.426004] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 547.238125] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 554.024434] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 560.854794] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 567.678160] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 574.494666] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 581.336653] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 588.157710] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 595.221122] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 602.047429] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 608.829534] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 615.639079] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 622.454991] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 629.273231] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 636.056613] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 642.858096] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 649.640753] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 657.184094] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 664.008018] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 670.838639] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 677.675418] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 684.507255] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 691.310994] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 698.095325] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 704.914509] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 711.725178] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin uname -r 3.2.0-29-generic ifconfig eth0 Link encap:Ethernet HWaddr 4c:72:b9:59:6c:61 inet addr:192.168.0.11 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::4e72:b9ff:fe59:6c61/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4447 errors:0 dropped:0 overruns:0 frame:0 TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3671147 (3.6 MB) TX bytes:335133 (335.1 KB) Interrupt:42 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:515 errors:0 dropped:0 overruns:0 frame:0 TX 
packets:515 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:83153 (83.1 KB) TX bytes:83153 (83.1 KB) wlan0 Link encap:Ethernet HWaddr 74:e5:43:32:47:95 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:280 errors:0 dropped:0 overruns:0 frame:0 TX packets:51 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32958 (32.9 KB) TX bytes:10431 (10.4 KB)

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx
    With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration, which for EC2 you need to do with the command line, and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them.  If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.
    To start with, you need to manually set up your app in an EC2 VM; for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf; see the sketch at the end of this post). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested that it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console.
    When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go.
    1. Create a launch configuration
    This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:
      as-create-launch-config sc-xyz-launcher # name of the launch config
      --image-id ami-7b9e9f12 # id of the AMI you extracted from your VM
      --region eu-west-1 # which region the new instance gets created in
      --instance-type t1.micro # size of the instance to create
      --group quicklaunch-1 # security group for the new instance
    2. Create an auto-scaling group
    The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances:
      as-create-auto-scaling-group sc-xyz-asg # auto-scaling group name
      --region eu-west-1 # region to create in
      --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances
      --min-size 1 # minimum number of nodes in the group
      --max-size 5 # maximum number of nodes in the group
      --default-cooldown 300 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required
      --availability-zones eu-west-1a eu-west-1b eu-west-1c # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them
    3. Create a scale-up policy
    The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached.
    It links to the auto-scaling group; this sample results in one additional node being spun up:
      as-put-scaling-policy scale-up-policy # policy name
      -g sc-psod-woker-asg # auto-scaling group the policy works with
      --adjustment 1 # size of the adjustment
      --region eu-west-1 # region
      --type ChangeInCapacity # type of adjustment; this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%
    4. Create a scale-down policy
    The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:
      as-put-scaling-policy scale-down-policy
      -g sc-psod-woker-asg
      "--adjustment=-1" # in Windows, use double-quotes to surround a negative adjustment value
      --type ChangeInCapacity
      --region eu-west-1
    5. Create a “high” CloudWatch alarm
    We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute. Then link the alarm to the scale-up policy in your group.
    6. Create a “low” CloudWatch alarm
    The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy.
    And that’s it. You don’t need your original VM, as the auto-scale group has a minimum number of nodes connected. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
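    For reference, here is a minimal sketch of the Topshelf-hosted worker mentioned at the start. The QueueWorker class, its behaviour and the service names are placeholders of mine, not code from this post:

      using System;
      using Topshelf;

      namespace WorkerHost
      {
          // Placeholder worker: the shared library exposes plain Start()/Stop()
          // entry points so an Azure Worker Role can call the same code.
          public class QueueWorker
          {
              public void Start() { /* begin polling the queue */ }
              public void Stop() { /* finish in-flight work and exit */ }
          }

          public class Program
          {
              public static void Main()
              {
                  HostFactory.Run(host =>
                  {
                      host.Service<QueueWorker>(svc =>
                      {
                          svc.ConstructUsing(name => new QueueWorker());
                          svc.WhenStarted(worker => worker.Start());
                          svc.WhenStopped(worker => worker.Stop());
                      });
                      host.RunAsNetworkService();
                      host.SetServiceName("QueueWorker");
                      host.SetDisplayName("Queue Worker");
                      host.SetDescription("Background worker that drains the message queue.");
                  });
              }
          }
      }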

    Read the article

  • Using Lazy<T> and abstract wrapper class to lazy-load complex system parameters

    - by DigiMortal
    .NET Framework 4.0 introduced a new class called Lazy<T>, and I wrote a blog post about it: .Net Framework 4.0: Using System.Lazy<T>. One thing is annoying for me – we have to keep the lazy loaded value and its value loader as separate things. In this posting I will introduce you to my Lazy<T> wrapper for system parameters that are complex to load; it uses a template method to keep the lazy value loader in the parameter class.
    Problem with original implementation
    Here’s the sample code that shows you how Lazy<T> is usually used. This is just sample code; don’t focus on the fact that this is a dummy console application.

      class Program
      {
          static void Main(string[] args)
          {
              var temperature = new Lazy<int>(LoadMinimalTemperature);
              Console.WriteLine("Minimal room temperature: " + temperature.Value);
              Console.ReadLine();
          }

          protected static int LoadMinimalTemperature()
          {
              var returnValue = 0;
              // Do complex stuff here
              return returnValue;
          }
      }

    The problem is that our class with many lazy loaded properties will grow messy if it has all the value loading code inside it. This code may be complex for more than one parameter, and in that case it is better to use a separate class for the parameter.
    Defining base class for parameters
    As a first step I will define a base class for all lazy-loaded parameters. This class is a wrapper around Lazy<T> and it also offers one template method that parameter classes have to override to provide the loaded data.

      public abstract class LazyParameter<T>
      {
          private Lazy<T> _lazyParam;

          public LazyParameter()
          {
              _lazyParam = new Lazy<T>(Load);
          }

          protected abstract T Load();

          public T Value
          {
              get { return _lazyParam.Value; }
          }
      }

    It is also possible to extend Lazy<T>, but I prefer not to, as Lazy<T> has six constructors we would have to take care of. I also don’t like to expose Lazy<T>’s public interface to users of my parameter classes.
    Creating parameter class
    Now it’s time to create our first parameter class. Notice how little we have in this class besides the overridden Load() method.

      public class MinimalRoomTemperature : LazyParameter<int>
      {
          protected override int Load()
          {
              var returnValue = 0;
              // Do complex stuff here
              return returnValue;
          }
      }

    Using the parameter class is simple. Here’s my test code.

      class Program
      {
          static void Main(string[] args)
          {
              var parameter = new MinimalRoomTemperature();
              Console.WriteLine("Minimal room temperature: " + parameter.Value);
              Console.ReadLine();
          }
      }

    Conclusion
    Lazy<T> is a useful class that you usually don’t want to use outside of APIs. I like this class, but I don’t like it when people use it directly in application code. In this posting I showed you how to use Lazy<T> with a wrapper class to move complex parameter loading code out of the classes that use the parameter. We ended up with a generic base class for parameters that you can also use as a base for other similar classes (you will want to find a better name for the base class in that case).
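    As a possible refinement (my own addition, not part of the original posting): since Lazy<T> has constructor overloads that accept a LazyThreadSafetyMode, the wrapper could expose that one choice without leaking the rest of Lazy<T>’s surface:

      using System;
      using System.Threading;

      public abstract class LazyParameter<T>
      {
          private readonly Lazy<T> _lazyParam;

          // Default matches Lazy<T>'s own default: full thread safety.
          protected LazyParameter()
              : this(LazyThreadSafetyMode.ExecutionAndPublication)
          {
          }

          // Derived classes may opt into a cheaper thread-safety mode.
          protected LazyParameter(LazyThreadSafetyMode mode)
          {
              _lazyParam = new Lazy<T>(Load, mode);
          }

          protected abstract T Load();

          public T Value
          {
              get { return _lazyParam.Value; }
          }
      }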

    Read the article

  • TFS API-Process Template currently applied to the Team Project

    - by Tarun Arora
    Download Demo Solution - here
    In this blog post I’ll show you how to use the TFS API to get the name of the Process Template that is currently applied to the Team Project. You can also download the demo solution attached; I’ve tested this solution against TFS 2010 and TFS 2011.
    1. Connecting to TFS programmatically
    I have a blog post that shows you from where to download the VS 2010 SP1 SDK and how to connect to TFS programmatically.

      private TfsTeamProjectCollection _tfs;
      private string _selectedTeamProject;

      TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
      tfsPP.ShowDialog();
      this._tfs = tfsPP.SelectedTeamProjectCollection;
      this._selectedTeamProject = tfsPP.SelectedProjects[0].Name;

    2. Programmatically get the Process Template details of the selected Team Project
    I’ll be making use of the VersionControlServer service to get the Team Project details and the ICommonStructureService to get the project properties.

      private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject()
      {
          var vcs = _tfs.GetService<VersionControlServer>();
          var ics = _tfs.GetService<ICommonStructureService>();
          ProjectProperty[] ProjectProperties = null;
          var p = vcs.GetTeamProject(_selectedTeamProject);
          string ProjectName = string.Empty;
          string ProjectState = String.Empty;
          int templateId = 0;
          ProjectProperties = null;
          ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out ProjectName, out ProjectState, out templateId, out ProjectProperties);
          return ProjectProperties;
      }

    3. What’s the catch?
    The ProjectProperties will contain a property “Process Template” whose value is the name of the process template. So, you will be able to use the line of code below to get the name of the process template.

      var processTemplateName = processTemplateDetails.Where(pt => pt.Name == "Process Template").Select(pt => pt.Value).FirstOrDefault();

    However, if the process template does not contain the property “Process Template” then you will need to add it. So, the question becomes: how do I add the name property to the process template?
    - Download the Process Template from the Process Template Manager to your local machine.
    - Once you have downloaded the Process Template to your local machine, navigate to the Classification folder within the template.
    - From the Classification folder, open Classification.xml.
    - Add a new property: <property name="Process Template" value="MSF for CMMI Process Improvement v5.0" />
    4. 
    Putting it all together…

      using System;
      using System.Collections.Generic;
      using System.ComponentModel;
      using System.Data;
      using System.Drawing;
      using System.Linq;
      using System.Text;
      using System.Windows.Forms;
      using Microsoft.TeamFoundation.Client;
      using Microsoft.TeamFoundation.VersionControl.Client;
      using Microsoft.TeamFoundation.Server;
      using System.Diagnostics;
      using Microsoft.TeamFoundation.WorkItemTracking.Client;

      namespace TfsAPIDemoProcessTemplate
      {
          public partial class Form1 : Form
          {
              public Form1()
              {
                  InitializeComponent();
              }

              private TfsTeamProjectCollection _tfs;
              private string _selectedTeamProject;

              private void btnConnect_Click(object sender, EventArgs e)
              {
                  TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
                  tfsPP.ShowDialog();
                  this._tfs = tfsPP.SelectedTeamProjectCollection;
                  this._selectedTeamProject = tfsPP.SelectedProjects[0].Name;
                  var processTemplateDetails = GetProcessTemplateDetailsForTheSelectedProject();
                  listBox1.Items.Clear();
                  listBox1.Items.Add(String.Format("Team Project Selected => '{0}'", _selectedTeamProject));
                  listBox1.Items.Add(Environment.NewLine);
                  var processTemplateName = processTemplateDetails.Where(pt => pt.Name == "Process Template")
                                                                  .Select(pt => pt.Value).FirstOrDefault();
                  if (!string.IsNullOrEmpty(processTemplateName))
                  {
                      listBox1.Items.Add(Environment.NewLine);
                      listBox1.Items.Add(String.Format("Process Template Name: {0}", processTemplateName));
                  }
                  else
                  {
                      listBox1.Items.Add(String.Format("The Process Template does not have the 'Name' property set up"));
                      listBox1.Items.Add(String.Format("***TIP: Download the Process Template and in Classification.xml add a new property Name, update the template then you will be able to see the Process Template Name***"));
                      listBox1.Items.Add(String.Format(" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"));
                  }
              }

              private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject()
              {
                  var vcs = _tfs.GetService<VersionControlServer>();
                  var ics = _tfs.GetService<ICommonStructureService>();
                  ProjectProperty[] ProjectProperties = null;
                  var p = vcs.GetTeamProject(_selectedTeamProject);
                  string ProjectName = string.Empty;
                  string ProjectState = String.Empty;
                  int templateId = 0;
                  ProjectProperties = null;
                  ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out ProjectName, out ProjectState, out templateId, out ProjectProperties);
                  return ProjectProperties;
              }
          }
      }

    Thank you for taking the time to read this blog post. If you enjoyed it, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across a better way of doing this? Please share your experience here. Questions, feedback, suggestions? Please leave a comment. Thank You!

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most systems with more than one CPU.

    Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

    CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client’s side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait.

    Note that not all CXPACKET waits are bad. You might experience a case where it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.

    Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server workload type.

    OLTP: On a pure OLTP system, where the transactions are small and the queries are quick rather than long-running, set the “Maximum Degree of Parallelism” to 1 (one). This makes sure that a query never goes for parallelism and does not incur more engine overhead.

    EXEC sys.sp_configure N'max degree of parallelism', N'1'
    GO
    RECONFIGURE WITH OVERRIDE
    GO

    Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the “Maximum Degree of Parallelism” to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in their performance due to multiple processors.

    EXEC sys.sp_configure N'max degree of parallelism', N'0'
    GO
    RECONFIGURE WITH OVERRIDE
    GO

    Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach: I set the “Maximum Degree of Parallelism” to 2, which means a query still uses parallelism but only on 2 CPUs. However, I keep the “Cost Threshold for Parallelism” very high. This way, not all the queries will qualify for parallelism; only the queries with a higher cost will go for parallelism. I have found this to work best for a system that has OLTP queries and also has a reporting server set up. Here, I am setting the ‘Cost Threshold for Parallelism’ to a value of 25 (which is just for illustration); you can choose any value, and you can find it out by experimenting with the system. In the following script, I am setting the ‘Max Degree of Parallelism’ to 2, which indicates that a query with a higher cost (here, more than 25) will qualify as a parallel query and run on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself.

    EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
    GO
    EXEC sys.sp_configure N'max degree of parallelism', N'2'
    GO
    RECONFIGURE WITH OVERRIDE
    GO

    Read all the posts in the Wait Types and Queue series. Additionally, the comment by Jonathan Kehayias is a must-read.
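    Before adjusting any of these settings, it is worth measuring how much CXPACKET wait the instance is actually accumulating. The following is a minimal sketch, not from the original post: it assumes .NET with System.Data.SqlClient and a connection string of your own (the one below is a placeholder), and simply reads the CXPACKET row from sys.dm_os_wait_stats.

    using System;
    using System.Data.SqlClient;

    class CxPacketCheck
    {
        static void Main()
        {
            // Placeholder connection string; adjust for your environment.
            var connectionString = "Server=localhost;Database=master;Integrated Security=true";
            var sql = @"SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
                        FROM sys.dm_os_wait_stats
                        WHERE wait_type = 'CXPACKET'";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // wait_time_ms includes signal_wait_time_ms; a high signal-to-total
                        // ratio points at CPU pressure rather than parallelism itself.
                        Console.WriteLine("{0}: tasks={1}, wait={2}ms, signal={3}ms",
                            reader.GetString(0), reader.GetInt64(1), reader.GetInt64(2), reader.GetInt64(3));
                    }
                }
            }
        }
    }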
    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest you read Books On-Line for further clarification. All the discussion of Wait Stats here is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Dynamic Data Connections

    - by Tim Dexter
    I have had a long-running email thread going between Dan and David over at Valspar and myself. They have built some impressive connectivity between their in-house apps and BIP using web services. The crux of their problem has been that they have multiple databases that need the same report executed against them. Not such an unusual request, as I have spoken to two customers in the last month with the same situation. Of course, you could create a report against each data connection and just run or call the appropriate report. Not too bad if you have two or three data connections, but more than that and it becomes a maintenance nightmare, having to update queries or layouts. Ideally you want just a single report definition on the BIP server, and to dynamically set the connection to be used at runtime based on the user or the system the user is in.

    A quick bit of digging and help from Shinji on the development team and I had an answer. Rather embarrassingly, the solution has been around since the Oct 2010 rollup patch last year. Still, I grabbed the latest Jan 2011 patch - check out Note 797057.1 for the latest available patches. Once installed, I used the best web service testing tool I have yet to come across - SoapUI. Just point it at the WSDL and you can check out the available services and their parameters, and then test them too.

    The XML packet has a new dynamic data source entry. You can set your own custom JDBC connection or just specify an existing data source name that is defined on the server.

    <pub:runReport>
       <pub:reportRequest>
          <pub:attributeFormat>xml</pub:attributeFormat>
          <pub:attributeTemplate>0</pub:attributeTemplate>
          <pub:byPassCache>true</pub:byPassCache>
          <pub:dynamicDataSource>
             <pub:JDBCDataSource>
                <pub:JDBCDriverClass></pub:JDBCDriverClass>
                <pub:JDBCDriverType></pub:JDBCDriverType>
                <pub:JDBCPassword></pub:JDBCPassword>
                <pub:JDBCURL></pub:JDBCURL>
                <pub:JDBCUserName></pub:JDBCUserName>
                <pub:dataSourceName>Conn1</pub:dataSourceName>
             </pub:JDBCDataSource>
          </pub:dynamicDataSource>
          <pub:reportAbsolutePath>/Test/Employee Report/Employee Report.xdo</pub:reportAbsolutePath>
       </pub:reportRequest>
       <pub:userID>Administrator</pub:userID>
       <pub:password>Administrator</pub:password>
    </pub:runReport>

    So I have Conn1 and Conn2 defined, which are connections to different databases. I can just flip the name, make the WS call and get the appropriate dataset in my report. Just as an example, here's my web service call in Java code - just a case of bringing the BIP Java libs into my Java project.

    PublicReportServiceService publicReportServiceService = new PublicReportServiceService();
    PublicReportService publicReportService = publicReportServiceService.getPublicReportService_v11();

    String userID = "Administrator";
    String password = "Administrator";

    ReportRequest rr = new ReportRequest();
    rr.setAttributeFormat("xml");
    rr.setAttributeTemplate("1");
    rr.setByPassCache(true);
    rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo");
    rr.setReportOutputPath("c:\\temp\\output.xml");

    BIPDataSource bipds = new BIPDataSource();
    JDBCDataSource jds = new JDBCDataSource();
    jds.setDataSourceName("Conn1");
    bipds.setJDBCDataSource(jds);
    rr.setDynamicDataSource(bipds);

    try {
        publicReportService.runReport(rr, userID, password);
    } catch (InvalidParametersException e) {
        e.printStackTrace();
    } catch (AccessDeniedException e) {
        e.printStackTrace();
    } catch (OperationFailedException e) {
        e.printStackTrace();
    }

    Note, I'm no Java whiz kid or whizzy old bloke, at least not unless I've had a coffee. JDeveloper has a nice feature where you point it at the WSDL and it creates everything to support your calling code for you.

    A couple of things to remember:

    1. When you call the service, remember to set the bypass-the-cache option. Forget it, and much scratching of your head and taking my name in vain will ensue.

    2. My demo actually hit the same database but used two users: one accessed the base tables, the other accessed views with the same names. For far too long I thought the connection swapping was not working. I was getting the same results for both users until I realized I was specifying the schema name for the table/view in my query, e.g. select * from EMP.EMPLOYEES. So remember to have a generic query that will depend entirely on the connection.

    It's a neat feature if you want to be able to switch connections while only defining a single report, and call it remotely. Now, if you want the connection to be set dynamically based on the user when the report is run via the user interface, that's going to be more tricky ... need to think about that one!
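    If you are calling the service without generated proxies, the same packet is easy to assemble by hand. This is a hedged sketch in C# using LINQ to XML, not from the original post; the "pub" namespace URI below is a placeholder, so take the real one from the runReport WSDL.

    using System;
    using System.Xml.Linq;

    class RunReportPacketBuilder
    {
        static void Main()
        {
            // Placeholder namespace; copy the actual one from the WSDL.
            XNamespace pub = "http://example.com/PublicReportService";

            var packet = new XElement(pub + "runReport",
                new XElement(pub + "reportRequest",
                    new XElement(pub + "attributeFormat", "xml"),
                    new XElement(pub + "byPassCache", "true"),
                    new XElement(pub + "dynamicDataSource",
                        new XElement(pub + "JDBCDataSource",
                            // Leave the JDBC fields empty and name an existing
                            // server-side data source instead.
                            new XElement(pub + "dataSourceName", "Conn1"))),
                    new XElement(pub + "reportAbsolutePath", "/Test/Employee Report/Employee Report.xdo")),
                new XElement(pub + "userID", "Administrator"),
                new XElement(pub + "password", "Administrator"));

            // Swap "Conn1" for "Conn2" at runtime to point the same report at another database.
            Console.WriteLine(packet);
        }
    }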

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background

    What Are Smart Grids?
    Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to:
    - Link customers to information that helps them manage consumption and use electricity wisely.
    - Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages.
    - Provide utilities with information that helps them improve performance and control costs.

    What Is Driving Smart Grid Development?

    Environmental Impact
    Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers' power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.

    Costs
    The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management, and large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume - but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes. Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy-efficient appliances. Efficiency may or may not lower customer bills, because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.

    Utility Operations
    Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long "wish lists" of projects and applications they would like to fund in order to improve customer service or ease staff's burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result - for instance, increased productivity and fuel savings from better routing. Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits.

    What Is Smart Grid Business Software?
    Smart Grid business software gathers data from a Smart Grid and uses it to improve a utility's business processes. Smart Grid business software also helps utilities provide relevant information to customers, who can then use it to reduce their own consumption and improve their environmental profiles.

    Smart Grid Business Software Minimizes the Impact of Peak Demand
    Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand:
    - The more assets a utility must build that are used only for brief periods - an inefficient use of capital.
    - The higher the utility's risk profile rises, given the uncertainties surrounding the time needed for permitting, building, and recouping costs.
    - The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods.

    Smart Grids enable a variety of programs that reduce peak demand, including:
    - Time-of-use pricing and critical peak pricing - programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets.
    - Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs.
    - Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts.

    Smart Grids also help utilities manage peaks with existing assets by enabling:
    - Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint the need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or "over-build".
    - Better peak demand analysis. As a result: distribution planners can better size equipment (e.g. transformers) to avoid over-building; operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks, again reducing the tendency to over-build; and supply managers can more closely match procurement with delivery, fine-tuning supply portfolios and reducing the tendency to over-contract for peak supply or resort to spot market purchases during high peaks.

    Smart Grids can help lower the cost of remaining peaks by:
    - Standardizing interconnections for new distributed resources (such as electricity storage devices).
    - Placing the interconnections where needed to support anticipated grid congestion.

    Smart Grid Business Software Lowers the Cost of Field Services
    By processing Smart Grid data through their business software, utilities can reduce such field costs as:
    - Vegetation management. Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming.
    - Service vehicle fuel. Many utility service calls are "false alarms." Checking meter status before dispatching crews prevents many unnecessary "truck rolls." Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it.

    Smart Grid Business Software Ensures Regulatory Compliance
    Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by:
    - Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible ("non-core") customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators.
    - Monitoring regulations imposed on customers, such as maximum use during specific time periods.
    - Using accurate time-stamped event history derived from intelligent devices distributed throughout the Smart Grid to monitor and report reliability statistics and risk compliance.
    - Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9).

    Smart Grid Business Software Strengthens Utilities' Connection to Customers While Reducing Customer Service Costs
    During outages, Smart Grid business software can:
    - Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations, and permits utilities to confirm outage resolution at every meter location.
    - Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers.
    - Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitate the display of outage maps for customer and public-service use.

    Smart Grids can significantly reduce the cost to:
    - Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property, or to disconnect customers for nonpayment.
    - Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain.
    - Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption.

    Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can:
    - Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off "high bill" complaints to the contact center. Note that such "high usage" or "additional charges apply because you are out of range" notices - frequently via text messaging - are already common among mobile phone providers.
    - Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.).
    - Help customers find and reduce causes of over-consumption, with no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them.

    Where permitted, Smart Grids can open the doors to such new utility service offerings as:
    - Monitoring properties. Landlords reduce the costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
    - Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement.
    - Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.
    - Prepayment plans that do not need special meters.

    Smart Grid Business Software Helps Customers Control Energy Costs
    There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance:
    - Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites.
    - Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers.
    - Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be the discovery of electrical problems that can be resolved through rewiring or maintenance - before more serious fires or accidents happen.

    Smart Grid Business Software Facilitates Use of Renewables
    Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into traditional grids, which have historically handled only fully dispatchable generation. Smart Grid business software helps solve many of these issues by:
    - Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance responses to compensate as needed.
    - Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion.
    - Facilitating modeling and monitoring of locally generated supply from renewables, and thus helping to maximize its use.
    - Increasing the efficiency of "net metering" (through which utilities can use electricity generated by customers) by providing data for analysis and by integrating the production and consumption aspects of customer accounts.

    During non-peak periods, such techniques enable utilities to increase the percentage of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity.

    Conclusion
    Utility missions are changing. Yesterday, they focused on the delivery of reasonably priced energy and water. Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission, but they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings - benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience.

    This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common, but they may throw you for a loop. Here's what I found.

    Simple Parameters from XML or JSON Content

    Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

    public string ReturnAlbumInfo(Album album)
    {
        return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
    }

    However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from the XML or JSON body by default, and you may end up with failures. Take the following two very simple methods:

    public string ReturnString(string message)
    {
        return message;
    }

    public HttpResponseMessage ReturnDateTime(DateTime time)
    {
        return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
    }

    The first one accepts a string, and if called with a JSON string from the client like this:

    var client = new HttpClient();
    var result = client.PostAsJsonAsync<string>("http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
                                                "Hello World").Result;

    which results in a trace like this:

    POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
    Content-Type: application/json; charset=utf-8
    Host: rasxps
    Content-Length: 13
    Expect: 100-continue
    Connection: Keep-Alive

    "Hello World"

    it produces… wait for it: null.

    Sending a date in the same fashion:

    var client = new HttpClient();
    var result = client.PostAsJsonAsync<DateTime>("http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
                                                  new DateTime(2012, 1, 1)).Result;

    results in this trace:

    POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
    Content-Type: application/json; charset=utf-8
    Host: rasxps
    Content-Length: 30
    Expect: 100-continue
    Connection: Keep-Alive

    "\/Date(1325412000000-1000)\/"

    (yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM, with Json.NET used for client serialization) and produces an error response:

    The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

    Basically, simple parameters are not parsed properly, resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work, using an explicit [FromBody] attribute:

    public string ReturnString([FromBody] string message)

    and

    public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

    which explicitly instructs Web API to read the value from the body.

    UrlEncoded Form Variable Parsing

    Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the query string and route values, but it doesn't explicitly map parameters from POST values either. Taking our same ReturnString function from earlier and posting a message POST variable like this:

    var formVars = new Dictionary<string, string>();
    formVars.Add("message", "Some Value");
    var content = new FormUrlEncodedContent(formVars);

    var client = new HttpClient();
    var result = client.PostAsync("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", content).Result;

    which produces this trace:

    POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
    Content-Type: application/x-www-form-urlencoded
    Host: rasxps
    Content-Length: 18
    Expect: 100-continue

    message=Some+Value

    When calling ReturnString:

    public string ReturnString(string message)
    {
        return message;
    }

    unfortunately it does not map the message value to the message parameter. This sort of mapping is not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

    public string ReturnMessageModel(MessageModel model)
    {
        return model.Message;
    }

    public class MessageModel
    {
        public string Message { get; set; }
    }

    Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be mapped to properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values, for that matter) from Web API's Request object, as far as I can discern. Well, only if you consider this easy:

    public string ReturnString()
    {
        var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
        return formData.Get("message");
    }

    Oddly, FormDataCollection does not allow indexers to work, so you have to use the .Get() method, which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

    public string ReturnString()
    {
        return HttpContext.Current.Request.Form["message"];
    }

    which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api
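    For what it's worth, later Web API builds do ship a GetQueryNameValuePairs() extension method on HttpRequestMessage (in System.Web.Http.dll, System.Net.Http namespace) that covers the query string case; whether it exists in the build you are on is worth verifying, so treat this as a hedged sketch rather than a guaranteed API, and note the controller name here is made up:

    using System.Linq;
    using System.Net.Http;
    using System.Web.Http;

    public class DemoController : ApiController
    {
        public string ReturnString()
        {
            // Flattens the query string into key/value pairs,
            // e.g. /rpc/ReturnString?message=Hello
            var message = Request.GetQueryNameValuePairs()
                                 .Where(kv => kv.Key == "message")
                                 .Select(kv => kv.Value)
                                 .FirstOrDefault();
            return message;
        }
    }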

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi Guys, Versions I am running (basically latest of everything): PHP: 5.3.1, MySQL: 5.1.41, Apache: 2.2.14, OS: CentOS (latest).

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search and locate the correct document(s) effectively; for this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server and sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure appears as such:

    drivers_license
    |- DL_123.jpg
    voice_signatures
    |- VS_123.wav
    |- VS_4567.wav
    contracts

    So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

    TABLE: FileUploads
    - FileID
    - CustomerID (the customer id that the file belongs to; they all have this.)
    - JobID/SaleID (the id of the job/sale associated, if any.)
    - FileSize
    - FileType
    - UploadedDateTime
    - UploadedBy
    - FilePath (the directory path the file is stored in.)
    - FileName (current file name of the uploaded file, a combination of CustomerID and JobID/SaleID if applicable.)
    - FileDescription
    - OriginalFileName (original name of the source file when uploaded, including extension.)

    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require, and with the FilePath and FileName I can provide the link for download: http... server / FilePath / FileName

    There are a number of problems with this method:

    - Storing files in this "database unconscious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted also, or vice versa.
    - Files are strewn all over the place: different servers, computers, etc.
    - The file name is the ONLY thing matching the binary to the database, the customer profile and the customer records.
    - etc., etc.

    There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . Also there is an interesting article here too: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this. I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely.

    The article provided at this link: dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64kb chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea, since it alleviates pressure on the server's memory; instead of loading an entire 100mb file into RAM and then sending it to the client, it is done 64kb at a time. I have tried this (and updated his scripts) and it is totally successful, in a very small frame of testing.

    So if you agree that this method is a viable, stable and robust long-term option to store moderately large files (1kb to a couple of hundred megs), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "File Management" PHP script that gives an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
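    To make the chunking idea concrete, here is a minimal sketch of the splitting step in C# (the question's stack is PHP, so this is purely illustrative). The 64 KB figure comes from the article the poster cites; the FileID/ChunkIndex pairing mirrors the described schema, and the table and column names implied in the comments are hypothetical.

    using System;
    using System.Collections.Generic;
    using System.IO;

    class FileChunker
    {
        const int ChunkSize = 64 * 1024; // 64 KB, per the dreamwerx article

        // Splits a file into sequential chunks. Each (index, bytes) pair would be
        // inserted as one row, e.g. FileChunks(FileID, ChunkIndex, Data BLOB).
        static IEnumerable<KeyValuePair<int, byte[]>> ReadChunks(string path)
        {
            using (var stream = File.OpenRead(path))
            {
                var buffer = new byte[ChunkSize];
                int index = 0, read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    var chunk = new byte[read];
                    Array.Copy(buffer, chunk, read);
                    yield return new KeyValuePair<int, byte[]>(index++, chunk);
                }
            }
        }

        static void Main(string[] args)
        {
            // Streaming back out is the reverse: select chunks ordered by
            // ChunkIndex and write each to the response as it arrives.
            foreach (var chunk in ReadChunks(args[0]))
                Console.WriteLine("chunk {0}: {1} bytes", chunk.Key, chunk.Value.Length);
        }
    }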

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code, you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing, or even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long.

    If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years are a very short time when the first customer tries your heavily distributed networking application over a slow WIFI network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real world systems, not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure. In the WIFI example the model assumed a low latency, high bandwidth LAN connection. When this assumption became wrong, the system showed a drastic change in startup time.

    Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time the theme is requested. A simple example of a theme cache might look like this:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    struct Theme
    {
        public Color Color;
        public Font Font;
    }

    static class ThemeCache
    {
        static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
        {
            { "Default", new Theme { Color = Color.AliceBlue } },
            { "Theme12", new Theme { Color = Color.Aqua } },
        };

        public static Theme Get(string theme)
        {
            Theme cached = _Cache[theme];
            if (cached.Font == null)
            {
                Console.WriteLine("Creating new font");
                cached.Font = new Font("Arial", 8);
            }
            return cached;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Theme item = ThemeCache.Get("Theme12");
            item = ThemeCache.Get("Theme12");
        }
    }

    This cache creates font objects only once, since on first retrieval of the Theme object the font is added to the Theme object. When we let the application run, it should print "Creating new font" only once. Right? Wrong! The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

    Theme cached = _Cache[theme];
    if (cached.Font == null)
    {
        Console.WriteLine("Creating new font");
        cached.Font = new Font("Arial", 8);
    }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache as it currently stands is actually a non-caching cache.

    The funny thing is that I found this out with a profiler, by looking at which objects were finalized: I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means the cache was never needed (I found no perf issue due to the creation of font objects), the cache was never profiled to see whether it brought any performance gain, and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
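    To make the suggested fix concrete, here is a minimal sketch of the write-back variant of Get (keeping the struct, as the post describes; the class alternative just changes the Theme declaration):

    public static Theme Get(string theme)
    {
        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
            // Write the mutated copy back, otherwise the dictionary
            // keeps a Theme whose Font field is still null.
            _Cache[theme] = cached;
        }
        return cached;
    }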

    Read the article

  • EHome IR receiver and Ubuntu 13 - any one have this working?

    - by squakie
    I have a "generic" USB IR receiver I purchased off eBay to make my life a little easier with XBMC on my Ubuntu box. I am currently running 13.10 and have never tried, nor have any knowledge of, IR in Ubuntu. I know of lirc, and I know a lot of it is now included in the kernel. My understanding is that lirc, in basic terms, maps pulses from a remote control to functions - like keyboard or mouse clicks. It is also my understanding that I might still need a driver or something for my device.

    lsusb shows the device as:

    Bus 006 Device 003: ID 147a:e016 Formosa Industrial Computing, Inc. eHome Infrared Receiver

    dmesg shows the following pertaining to the device:

    [43635.311985] usb 6-2: USB disconnect, device number 2
    [43641.344387] usb 6-2: new full-speed USB device number 3 using ohci-pci
    [43641.543454] usb 6-2: New USB device found, idVendor=147a, idProduct=e016
    [43641.543467] usb 6-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [43641.543473] usb 6-2: Product: eHome Infrared Transceiver
    [43641.543478] usb 6-2: Manufacturer: Formosa21
    [43641.543483] usb 6-2: SerialNumber: FM000623
    [43641.555736] Registered IR keymap rc-rc6-mce
    [43641.555968] input: Media Center Ed. eHome Infrared Remote Transceiver (147a:e016) as /devices/pci0000:00/0000:00:12.0/usb6/6-2/6-2:1.0/rc/rc2/input15
    [43641.556221] rc2: Media Center Ed. eHome Infrared Remote Transceiver (147a:e016) as /devices/pci0000:00/0000:00:12.0/usb6/6-2/6-2:1.0/rc/rc2
    [43641.556584] input: MCE IR Keyboard/Mouse (mceusb) as /devices/virtual/input/input16
    [43641.557186] rc rc2: lirc_dev: driver ir-lirc-codec (mceusb) registered at minor = 0
    [43641.731965] mceusb 6-2:1.0: Registered Formosa21 eHome Infrared Transceiver with mce emulator interface version 1
    [43641.731978] mceusb 6-2:1.0: 2 tx ports (0x0 cabled) and 2 rx sensors (0x0 active)

    When I connect the same IR receiver to a Raspberry Pi running OpenELEC/XBMC, there is no flashing LED unless I press a remote button, and the device works. In Ubuntu, the LED is constantly blinking, and nothing happens when I press a remote key. I tried the command line test program, but it never echoes anything back to the terminal window. I believe it must need some sort of driver or something else, but I am completely in the dark on this. If it matters, I also have:

    - Logitech wireless keyboard/mouse USB receiver
    - Tenda USB wireless adapter

    And.....I've also noticed some errors that now show in dmesg which seem to be somehow related to HDMI, if that makes any sense:

    [46721.144731] HDMI: ELD buf size is 0, force 128
    [46721.144749] HDMI: invalid ELD data byte 0
    [46721.444025] HDMI: ELD buf size is 0, force 128
    [46721.444061] HDMI: invalid ELD data byte 0
    [46721.743375] HDMI: ELD buf size is 0, force 128
    [46721.743411] HDMI: invalid ELD data byte 0
    [46722.043092] HDMI: ELD buf size is 0, force 128
    [46722.043118] HDMI: invalid ELD data byte 0
    [46722.343086] HDMI: ELD buf size is 0, force 128
    [46722.343122] HDMI: invalid ELD data byte 0
    [46722.642517] HDMI: ELD buf size is 0, force 128
    [46722.642574] HDMI: invalid ELD data byte 0
    [46722.942459] HDMI: ELD buf size is 0, force 128
    [46722.942485] HDMI: invalid ELD data byte 0
    [46723.242103] HDMI: ELD buf size is 0, force 128
    [46723.242129] HDMI: invalid ELD data byte 0
    [46723.541877] HDMI: ELD buf size is 0, force 128
    [46723.541923] HDMI: invalid ELD data byte 0
    [58366.651954] HDMI: ELD buf size is 0, force 128
    [58366.651980] HDMI: invalid ELD data byte 0
    [58366.951523] HDMI: ELD buf size is 0, force 128
    [58366.951549] HDMI: invalid ELD data byte 0
    [58367.251075] HDMI: ELD buf size is 0, force 128
    [58367.251121] HDMI: invalid ELD data byte 0
    [58367.550517] HDMI: ELD buf size is 0, force 128
    [58367.550563] HDMI: invalid ELD data byte 0
    [58367.850219] HDMI: ELD buf size is 0, force 128
    [58367.850256] HDMI: invalid ELD data byte 0
    [58368.150160] HDMI: ELD buf size is 0, force 128
    [58368.150185] HDMI: invalid ELD data byte 0
    [58368.449544] HDMI: ELD buf size is 0, force 128
    [58368.449570] HDMI: invalid ELD data byte 0
    [58368.749583] HDMI: ELD buf size is 0, force 128
    [58368.749629] HDMI: invalid ELD data byte 0
    [58369.049280] HDMI: ELD buf size is 0, force 128
    [58369.049326] HDMI: invalid ELD data byte 0
    [58394.706273] HDMI: ELD buf size is 0, force 128
    [58394.706300] HDMI: invalid ELD data byte 0
    [58394.706350] HDMI: ELD buf size is 0, force 128
    [58394.706367] HDMI: invalid ELD data byte 0
    [58395.003032] HDMI: ELD buf size is 0, force 128
    [58395.003058] HDMI: invalid ELD data byte 0
    [58395.302680] HDMI: ELD buf size is 0, force 128
    [58395.302705] HDMI: invalid ELD data byte 0
    [58395.602442] HDMI: ELD buf size is 0, force 128
    [58395.602477] HDMI: invalid ELD data byte 0
    [58395.902143] HDMI: ELD buf size is 0, force 128
    [58395.902179] HDMI: invalid ELD data byte 0
    [58396.201839] HDMI: ELD buf size is 0, force 128
    [58396.201875] HDMI: invalid ELD data byte 0
    [58396.501538] HDMI: ELD buf size is 0, force 128
    [58396.501574] HDMI: invalid ELD data byte 0
    [58396.801232] HDMI: ELD buf size is 0, force 128
    [58396.801268] HDMI: invalid ELD data byte 0
    [58397.100583] HDMI: ELD buf size is 0, force 128
    [58397.100627] HDMI: invalid ELD data byte 0
    [63095.766042] systemd-hostnamed[8875]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname!

    EDIT: Maybe another way to look at this: what does Ubuntu do, or not do, that OpenELEC on the Raspberry Pi does, such that the receiver works in OpenELEC but not in Ubuntu?

    Read the article

  • CodePlex Daily Summary for Saturday, February 05, 2011

    CodePlex Daily Summary for Saturday, February 05, 2011Popular ReleasesNuclex Framework: R1323: This release is a pure XNA 4.0 release that no longer includes any XNA 3.1 binaries or projects. All x86 assemblies have been compiled targeting the .NET 4.0 Client Profile. Requires either Visual C# 2010 Express or Visual Studio 2010, both with XNA Game Studio 4.0. 3rd party libraries needed to compile and run the source code are included, so everything will compile out of the box. Changes: - Thanks to a generous contribution by Adrian Tsai, the TrueType importer now accepts standard Windo...Community Forums NNTP bridge: Community Forums NNTP Bridge V43: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has added some features / bugfixes: Bugfix: Now supporting multi-line headers in all headers ;) / Thanks to Kai Schätzl for reporting this! Debug output optimized / Added a "Copy to clipboard" button in the debug windowFacebook C# SDK: 5.0.2 (BETA): PLEASE TAKE A FEW MINUTES TO GIVE US SOME FEEDBACK: Facebook C# SDK Survey This is third BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. This release contains some breaking changes. Particularly with authentication. After spending time reviewing the trouble areas that people are having using th...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 3.0.0 for MVC3: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. ChangelogTargeting ASP.NET MVC 3 and .NET 4.0 Additional UpdatePriority options for generating XML sitemaps Allow to specify target on SiteMapTitleAttribute One action with multiple routes and breadcrumbs Medium Trust optimizations Create SiteMapTitleAttribute for setting parent title IntelliSense for your sitemap with MvcSiteMapSchem...Rawr: Rawr 4.0.18 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta3: This is the beta3 release for the ArcGIS Editor for OpenStreetMap version 1.1. Bug fixes in beta3: make the user interface for editing attributes keyboard friendly make the geoprocessing tools available for Python scripting (incl. sample scripts in the tool documentation) change in the logic for sending updates to the OpenStreetMap server updates to point symbology for the feature templates Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consi...patterns & practices SharePoint Guidance: SharePoint Guidance 2010 Hands On Lab: SharePoint Guidance 2010 Hands On Lab consists of six labs: one for logging, one for service location, and four for application setting manager. Each lab takes about 20 minutes to walk through. Each lab consists of a PDF document. 
You can go through the steps in the doc to create the solution and then build/deploy it and run the lab. For those of you who want to save time, we have included the final solution so you can just build/deploy it and run the lab.

- Mobile Device Detection and Redirection 0.1.11.11: Improvements to the beta release. The following changes have been made in version 0.1.11.11: BlackBerry Version 6 devices (such as the 9800 Torch) are now correctly identified with a dedicated handler; Android-powered devices are now correctly identified; minor change to Provider.cs to improve performance and optimise data sent to 51Degrees.mobi if the option is enabled; GC.Collect is no longer called at any point - all garbage collection now happens automatically. IMPORTANT CHANGES: This rele...
- TweetSharp v2.0.0.0 - Preview 10: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 9 changes: added support for trends; added support for Silverlight 4; elevated WP7 fixes. Third-party library versions: Hammock v1.1.7 (http://hammock.codeplex.com), Json.NET 4.0 Release 1 (http://json.codeplex.com).
- Facebook Graph Toolkit 0.7: Version 0.7 updates (2 Feb 2011): new Facebook Graph objects (Link, Note, StatusMessage); new publish features (status update, post with link attachment); new Graph API connections in the User object (statuses, links, notes); internal code path improvement on the Api object; bug fixed: extra "r" character appears for strings with "\r" symbols in JSON objects; bug fixed: error when performing a postback to the same page. Tutorial and documentation available at http://fbgraph.computerbeacon.net
- Hammock for REST v1.1.7: Added support for cookies; added support for custom Content-Disposition types; fixes based on user feedback. Supported platforms: .NET 2.0; .NET 3.5 SP1 and .NET 3.5 Client Profile; .NET 4.0 and .NET 4.0 Client Profile; Windows Phone 7; Silverlight 3 and 4; Mono 2.6 (see Mono and HTTPS).
- Phalanger - The PHP Language Compiler for the .NET Framework 2.0 (February 2011): Next release of Phalanger; again faster, more stable and ready for daily use. Based on many user experiences, this release is one more step closer to being a perfect compiler and runtime for your old PHP applications, or a perfect platform for migrating to .NET. The February 2011 release introduces several changes, enhancements and fixes; see the complete changelist for all the changes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger....
- Chemistry Add-in for Word - Version 1.0: On February 1, 2011, we announced the availability of version 1 of the Chemistry Add-in for Word, as well as the assignment of the open source project to the Outercurve Foundation by Microsoft Research and the University of Cambridge. System requirements - hardware: any computer that can run Office 2007 or Office 2010; software: any version of Windows that can run Office 2007 or Office 2010, which includes Windows XP SP3 and...
- Minemapper v0.1.4: Updated mcmap, now supports new block types. Added a Worlds -> 'View Cache Folder' menu item.
- StyleCop for ReSharper 5.1.15005.000: Applied patch from rodpl for merging of StyleCop setting files with settings in the parent folder. Previous release: a considerable amount of work went into that release, with a huge focus on performance around the violation scanning subsystem - caching added to reduce IO operations around reading and merging of settings files, and caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes: StyleCop's new Objec...
- Minecraft Tools: Minecraft Topographical Survey 1.4: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds MCRegion support and fixes bugs that caused rendering to fail for some users. New in this version: support for rendering worlds compressed with MCRegion; fixed rendering failure when encountering non-NBT files with the .dat extension; fixed rendering failure when encountering corrupt NBT files; minor GUI updates. Note that the command...
- MVC Controls Toolkit 0.8: Fixed the following bugs: a variable name error in the javascript file that prevented the use of the deleted-item template of the Datagrid; after the changes applied to an item of the DataGrid are cancelled, all input fields are now reset to the very initial value they had; other minor bugs. Added: this version is available both for MVC 2 and MVC 3 (the MVC 3 version has a release number of 0.85, so one can install both versions); client validation support has been added to all control...
- Office Web.UI: Beta preview (source): This is the first beta. It includes full source code and all available controls. Some designers are not ready, and some features are not yet finalized (missing properties, draft styles). Thanks.
- ASP.net Ribbon Version 2.2: This release brings some new controls (part of Office Web.UI). A few bugs are fixed, and it includes the "auto resize" feature as you resize the window. (It can cause an infinite loop when the window is reduced too far, which is why this release is not marked as "stable".) I will release more versions - 2.3, 2.4... - until V3, which will be the official launch of Office Web.UI. Both products will evolve at the same speed. Thanks.
- xUnit.net - Unit Testing for .NET: xUnit.net 1.7, build #1540. Important note for ReSharper users: ReSharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: if you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: added support for ASP.NET MVC 3; added Assert.Equal(double expected, double actual, int precision); ad...

New Projects

- .NET Micro Framework PTP library: An implementation of the Picture Transfer Protocol for the .NET Micro Framework, developed in C#. This library allows a microcontroller running the .NET Micro Framework to communicate with digital cameras.
- Asp.net learning: asp.net learning.
- brainydexter demos: Demos I have developed over time to showcase different techniques, ranging from graphics/OpenGL to crazy language-specific (C++/C#) techniques.
- BrickFramework: BrickFramework.
- CodecoFW-SL: CodecoFW-SL is a framework developed in C# for working with Silverlight. It contains some controls and extensions to help with Silverlight development.
- Csharp Learning: My C# learning examples.
- jigsby: Personal code dump.
- Juego de la Vida: You will create life with the mouse.
- Logon Screen Launcher: Allows you to run applications at the Windows (XP/Vista/7) logon screen (Ctrl+Alt+Del) on system events, including logon/logoff, screen lock/unlock and startup/shutdown.
- MEF Silverlight Control Extensions: Gives you a declarative way of implementing control importing using MEF.
- Membership, Roles and Profile Library (MRPLibrary): This project provides a simple abstraction for the Membership, Roles and ProfileManager ASP.NET providers as well as ASP.NET FormsAuthentication. The library creates required database objects automatically and uses the web.config Membership, Roles and ProfileManager sections.
- Netpad: Basic .NET-based text document editor.
- Orchard Jumpstart: A jumpstart Orchard module, implementing basic module functionality. Created to make Orchard module creation a bit quicker. :)
- Orchard Rewrite Rules: Orchard module to add rewrite rules to your website using the Apache .htaccess file format.
- Outlook Social Connectors: The Outlook Social Connectors project was started as a 24 Hour Challenge project. The project has several Outlook Social Connectors (Twitter, FogBugz, ...) and aims to provide a framework for developing new connectors.
- Rail.Net: Small rail net application for rail analysis.
- registrudecasa: registrudecasa.
- RelDB: A true relational database management system compatible with Tutorial D.
- RTP Tooltip: This DNN module instantiates an instance of DNNToolTipManager and allows you to enter a list of ClientIDs to tooltipify.
- SSIS Report Generator Task (Custom Control Flow Component): SSIS Report Generator Task (Custom Control Flow Component).
- The Ministry of Technology Framework Extensions: The aim of the MOT Framework Extensions project is to offer a variety of solutions for common 'boilerplate' development requirements and speed up the development process. Ongoing discussions and information on the library are posted to http://www.theministryoftechnology.co.uk
- WindowsPhone 7 Live Soccer Scores: A Windows Phone 7 app which displays live soccer scores from the Dutch Eredivisie, English Premier League, Champions League and Europa League.
- Wolfram Alpha Api 2.0: On 20 January 2011 a new version of the Wolfram Alpha API, version 2.0, was opened. This project will help you work with the new API. Knowledge is power!
- Wurfl 51degrees Mobile Capabilities Viewer: A web app to display all the mobile capabilities for a given user agent; uses the 51degrees.codeplex.com .NET DLL and WURFL device files. The current sample in the downloads section of http://51degrees.codeplex.com doesn't display all browser and WURFL capabilities for a user agent.
- XNA Command Console (XNACC): XNACC is a component that adds an interactive command console to your XNA project. It supports many built-in commands, as well as custom commands, key bindings, simple functions (macros), console variables, and can use functions in external assemblies. Implemented in C#/VS2010.
- Xteq5: Xteq5 is a (hopefully) easy-to-use open source Windows computer management solution to get the job done.
- ZipArchiveReader: ZipLib is a ZIP file reader. It provides a simple way to read and write .zip files. The purpose of ZipLib is to give ZIP file capabilities to ASP.NET applications which were granted minimum permissions. It can be a partial-trust DLL that can run in the Internet zone and probably ...

    Read the article

  • Create Orchard Module in a Separate Project

    - by Steve Michelotti
    The Orchard Project is a new OSS Microsoft project that is being developed up on CodePlex. From the Orchard home page on CodePlex, it states: "Orchard project is focused on delivering a .NET-based CMS application that will allow users to rapidly create content-driven Websites, and an extensibility framework that will allow developers and customizers to provide additional functionality through modules and themes." The Orchard Project site contains additional information including documentation and walkthroughs. The ability to create a composite solution based on a collection of modules is a compelling feature. In Orchard, these modules can just be created as simple MVC Areas, or they can also be created inside of stand-alone web application projects. The walkthrough for writing an Orchard module that is available on the Orchard site uses a simple Area created inside of the host application. It is based on the Orchard MIX presentation. This walkthrough does an effective job introducing various Orchard concepts such as hooking into the navigation system, the theme/layout system, content types, and more. However, creating an Orchard module in a separate project does not seem to be concisely documented anywhere. Orchard ships with several modules OOTB that are in separate assemblies - but again, it's not well documented how to get started building one from scratch. The following are the steps I took to successfully get an Orchard module in a separate project up and running.

    Step 1 - Download the OrchardIIS.zip file from the Orchard Release page. Unzip and open up the solution.

    Step 2 - Add your project to the solution. I named my project "Orchard.Widget" and used an "MVC 2 Empty Web Application" project type. Make sure you put the physical path inside the "Modules" sub-folder of the main project.

    Step 3 - Add assembly references to Orchard.dll and Orchard.Core.dll.

    Step 4 - Add a controller and view. I'll just create a Hello World controller and view. Notice I created the view as a partial view (*.ascx). Also add the [Themed] attribute to the top of the HomeController class, just like the normal Orchard walkthrough shows it. (A minimal controller sketch appears at the end of this post.)

    Step 5 - Add Module.txt to the project root. This is a very important step: Orchard will not recognize your module without this text file present. It can contain just the name of your module:

      name: Widget

    Step 6 - Add Routes.cs. Notice I've given an area name of "Orchard.Widget" on lines 26 and 33.

      1: using System;
      2: using System.Collections.Generic;
      3: using System.Web.Mvc;
      4: using System.Web.Routing;
      5: using Orchard.Mvc.Routes;
      6:
      7: namespace Orchard.Widget
      8: {
      9:     public class Routes : IRouteProvider
      10:    {
      11:        public void GetRoutes(ICollection<RouteDescriptor> routes)
      12:        {
      13:            foreach (var routeDescriptor in GetRoutes())
      14:            {
      15:                routes.Add(routeDescriptor);
      16:            }
      17:        }
      18:
      19:        public IEnumerable<RouteDescriptor> GetRoutes()
      20:        {
      21:            return new[] {
      22:                new RouteDescriptor {
      23:                    Route = new Route(
      24:                        "Widget/{controller}/{action}/{id}",
      25:                        new RouteValueDictionary {
      26:                            {"area", "Orchard.Widget"},
      27:                            {"controller", "Home"},
      28:                            {"action", "Index"},
      29:                            {"id", ""}
      30:                        },
      31:                        new RouteValueDictionary(),
      32:                        new RouteValueDictionary {
      33:                            {"area", "Orchard.Widget"}
      34:                        },
      35:                        new MvcRouteHandler())
      36:                }
      37:            };
      38:        }
      39:    }
      40: }

    Step 7 - Add MainMenu.cs. This will make sure that an item appears in the main menu called "Widget" which points to the module.

      using System;
      using Orchard.UI.Navigation;

      namespace Orchard.Widget
      {
          public class MainMenu : INavigationProvider
          {
              public void GetNavigation(NavigationBuilder builder)
              {
                  builder.Add(menu => menu.Add("Widget", item => item.Action("Index", "Home", new
                  {
                      area = "Orchard.Widget"
                  })));
              }

              public string MenuName
              {
                  get { return "main"; }
              }
          }
      }

    Step 8 - Clean up web.config. By default, Visual Studio adds numerous sections to the web.config. The sections that can be removed are: appSettings, connectionStrings, authentication, membership, profile, and roleManager.

    Step 9 - Delete Global.asax. This project will ultimately be running from inside the Orchard host, so this "sub-site" should not have its own Global.asax.

    Now you're ready to run the app. When you first run it, the "Widget" menu item will appear in the main menu because of the MainMenu.cs file we added. We can then click the "Widget" link in the main menu to send us over to our view.

    Packaging: From start to finish, it's a relatively painless experience, but it could be better. For example, a Visual Studio project template that encapsulates aspects from this blog post would definitely make it a lot easier to get up and running with creating an Orchard module. Another aspect I found interesting is that if you read the first paragraph of the walkthrough, it says, "You can also develop modules as separate projects, to be packaged and shared with other users of Orchard CMS (the packaging story is still to be defined, along with marketplaces for sharing modules)." In particular, I will be extremely curious to see how the "packaging story" evolves. The first thing that comes to mind for me is: what if we explored MvcContrib Portable Areas as a potential mechanism for this packaging? This would certainly make things easy, since all artifacts (aspx, ascx, images, css, javascript) are wrapped up into a single assembly. Granted, Orchard does have its own infrastructure for layouts and themes, but it seems like integrating portable areas into this pipeline would not be a difficult undertaking. Maybe that'll be the next research task. :)
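    To make Step 4 concrete, here is a minimal sketch of what the Hello World controller might look like. This is illustrative code under my own assumptions (the controller body, and the Orchard.Themes namespace as the home of the [Themed] attribute), not an excerpt from the walkthrough:

      using System.Web.Mvc;
      using Orchard.Themes; // assumed location of the [Themed] attribute

      namespace Orchard.Widget.Controllers
      {
          // [Themed] asks Orchard to wrap the result in the active theme's layout.
          [Themed]
          public class HomeController : Controller
          {
              public ActionResult Index()
              {
                  // Renders the partial view (Index.ascx) created in Step 4.
                  return View();
              }
          }
      }

    The matching Index.ascx partial view only needs a line of markup (e.g. "Hello World from Orchard.Widget") to verify that the module, route, and menu item are wired together.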

    Read the article

  • F# for the C# Programmer

    - by mbcrump
    Are you a C# programmer who can't make it past a day without seeing or hearing someone mention F#? Today, I'm going to walk you through your first F# application and give you a brief introduction to the language. Sit back - this will only take about 20 minutes.

    Introduction: Microsoft's F# programming language is a functional language for the .NET framework that was originally developed at Microsoft Research Cambridge by Don Syme. In October 2007, the senior vice president of the developer division at Microsoft announced that F# was being officially productized to become a fully supported .NET language, and professional developers were hired to create a team of around ten people to build the product version. In September 2008, Microsoft released the first Community Technology Preview (CTP), an official beta release, of the F# distribution. In December 2008, Microsoft announced that the success of this CTP had encouraged them to escalate F#, and it will now be shipped as one of the core languages in Visual Studio 2010, alongside C++, C# 4.0 and VB. The F# programming language incorporates many state-of-the-art features from programming language research and ossifies them in an industrial-strength implementation that promises to revolutionize interactive, parallel and concurrent programming.

    Advantages of F#: F# is the world's first language to combine all of the following features:
    - Type inference: types are inferred by the compiler, and generic definitions are created automatically.
    - Algebraic data types: a succinct way to represent trees.
    - Pattern matching: a comprehensible and efficient way to dissect data structures.
    - Active patterns: pattern matching over foreign data structures.
    - Interactive sessions: as easy to use as Python and Mathematica.
    - High-performance JIT compilation to native code: as fast as C#.
    - Rich data structures: lists and arrays built into the language with syntactic support.
    - Functional programming: first-class functions and tail calls.
    - Expressive static type system: finds bugs during compilation and provides machine-verified documentation.
    - Sequence expressions: interrogate huge data sets efficiently.
    - Asynchronous workflows: syntactic support for monadic-style concurrent programming with cancellations.
    - Industrial-strength IDE support: multithreaded debugging, and graphical throwback of inferred types and documentation.
    - Commerce-friendly design and a viable commercial market.

    Let's try a short program in C#, then F#, to understand the differences.

    Using C# - create a variable and output the value to the console window. Sample program:

      using System;

      namespace ConsoleApplication9
      {
          class Program
          {
              static void Main(string[] args)
              {
                  var a = 2;
                  Console.WriteLine(a);
                  Console.ReadLine();
              }
          }
      }

    A breeze, right? 14 lines of code. We could have condensed it a bit by removing the "using" statement and tossing the namespace, but this is the typical C# program.

    Using F# - create a variable and output the value to the console window. To start, open Visual Studio 2010 or Visual Studio 2008. Note: if you are using VS2008, please download the SDK first before getting started; if you are using VS2010, you are already set up and ready to go. So, click File -> New Project -> Other Languages -> Visual F# -> Windows -> F# Application. Go ahead and enter a name and click OK.
    Now you will notice that the Solution Explorer contains Program.fs. Double-click Program.fs, enter the following, then hit F5 and it should run successfully. Sample program:

      open System

      let a = 2
      Console.WriteLine a

    Hmm, what? F# did the same thing in 3 lines of code. Show me the interactive evaluation that I keep hearing about. The F# development environment for Visual Studio 2010 provides two different modes of execution for F# code:
    - Batch compilation to a .NET executable or DLL (this was accomplished above).
    - Interactive evaluation (demonstrated below).

    The interactive session provides a > prompt, requires a double-semicolon ;; identifier at the end of a code snippet to force evaluation, and returns the names (if any) and types of the resulting definitions and values. To access the F# prompt in VS2010, go to View -> Other Windows -> F# Interactive. Once you have the interactive window, type in the following expression: 2+3;;

    I hope this guide helps you get started with the language. Please check out the following books for further information.

    F# books for further reading:

    Foundations of F# - Author: Robert Pickering. An introduction to functional programming with F#. Including many samples, this book walks through the features of the F# language and libraries, and covers many of the .NET Framework features which can be leveraged with F#.

    Functional Programming for the Real World: With Examples in F# and C# - Authors: Tomas Petricek and Jon Skeet. An introduction to functional programming for existing C# developers. This book explains the core principles using both C# and F#, shows how to use functional ideas when designing .NET applications, and presents practical examples such as the design of a domain-specific language, development of multi-core applications and programming of reactive applications.

    Read the article

  • List of Commonly Used Value Types in XNA Games

    - by Michael B. McLaughlin
    Most XNA programmers are concerned about generating garbage - more specifically, about allocating GC-managed memory (GC stands for "garbage collector" and is both the name of the class that provides access to the garbage collector and an acronym for the garbage collector, as a concept, itself). Two of the major target platforms for XNA (Windows Phone 7 and Xbox 360) use variants of the .NET Compact Framework, and on both variants the GC runs under various circumstances. Of concern to XNA programmers is the fact that it runs automatically after a fixed amount of GC-managed memory has been allocated (currently 1 MB on both systems). Many beginning XNA programmers are unaware of what constitutes GC-managed memory, though. So here's a quick overview.

    In .NET, there are two different "types" of types: value types and reference types. Only reference types are managed by the garbage collector. Value types are not managed by the garbage collector and are instead managed in other ways that are implementation dependent. For purposes of XNA programming, the important point is that they are not managed by the GC and thus do not, by themselves, increment that internal 1 MB allocation counter. (N.b. structs are value types. If you have a struct that has a reference type as a member, then that reference type, when instantiated, will still be allocated in GC-managed memory and will thus count against the 1 MB allocation counter. Putting it in a struct doesn't change the fact that it gets allocated on the GC heap, but the struct itself is created outside of the GC's purview.)

    Both value types and reference types use the keyword 'new' to allocate a new instance. Sometimes this keyword is hidden by a method which creates new instances for you, e.g. XmlReader.Create. But the important thing to determine is whether you are dealing with a value type or a reference type. If it's a value type, you can use the 'new' keyword to allocate new instances of that type without incrementing the GC allocation counter (except as above, where it's a struct with a reference type in it that is allocated by the constructor - but there are no .NET Framework or XNA Framework value types that do this, so it would have to be a struct you created, or one in some third-party library you were using, for that to even become an issue).

    The following is a list of most of the value types you are likely to use in a generic XNA game: AudioCategory (used with XACT; not available on WP7), AvatarExpression (Xbox 360 only, but exposed on Windows to ease Xbox development), bool, BoundingBox, BoundingSphere, byte, char, Color, DateTime, decimal, double, any enum (System.Enum itself is a class, but all enums are value types, such that there are no GC allocations for enums), float, GamePadButtons, GamePadCapabilities, GamePadDPad, GamePadState, GamePadThumbSticks, GamePadTriggers, GestureSample, int, IntPtr (rarely but occasionally used in XNA), KeyboardState, long, Matrix, MouseState, nullable structs (anytime you see, e.g., int? something, that '?' denotes a nullable struct, also called a nullable type), Plane, Point, Quaternion, Ray, Rectangle, RenderTargetBinding, sbyte (though I've never seen it used, since most people would just use a short), short, TimeSpan, TouchCollection, TouchLocation, TouchPanelCapabilities, uint, ulong, ushort, Vector2, Vector3, Vector4, VertexBufferBinding, VertexElement, VertexPositionColor, VertexPositionColorTexture, VertexPositionNormalTexture, VertexPositionTexture, and Viewport.

    So there you have it. That's not quite a complete list, mind you. For example, there are various structs in the .NET framework you might make use of. I left out everything from the Microsoft.Xna.Framework.Graphics.PackedVector namespace, since everything in there ventures into the realm of advanced XNA programming anyway (n.b. every single instantiable thing in that namespace is a struct and thus a value type; there are also two interfaces, but interfaces cannot be instantiated at all and thus don't figure into this discussion). There are so many enums you're likely to use (PlayerIndex, SpriteSortMode, SpriteEffects, SurfaceFormat, etc.) that including them would've flooded the list and reduced its utility, so I went with "any enum" and trust that you can figure out what the enums are (and it's rare to use 'new' with an enum anyway). The list also doesn't include any of the pre-defined static instances of some of the classes (e.g. BlendState.AlphaBlend, BlendState.Opaque, etc.), which are already allocated, such that using them doesn't cause any new allocations and therefore doesn't increase that 1 MB counter. The list also has a few misleading entries: VertexElement, VertexPositionColor, and all the other vertex types are structs, but you're only likely to ever use them as an array (for use with VertexBuffer or DynamicVertexBuffer), and all arrays are reference types (even arrays of value types such as VertexPositionColor[] or int[]).*

    So that's it for now. The note below may be a bit confusing (it deals with how the GC works and how arrays are managed in .NET). If so, you can probably safely ignore it for now, but feel free to ask any questions regardless.

    * Arrays of value types (where the value type doesn't contain any reference type members) are much faster for the GC to examine than arrays of reference types, so there is a definite benefit to using arrays of value types where it makes sense. But creating arrays of value types does cause the GC's allocation counter to increase. Indeed, allocating a large array of a value type is one of the quickest ways to increment the allocation counter, since a .NET array is a sequential block of memory. An array of reference types is just a sequential block of references (typically 4 bytes each), while an array of value types is a sequential block of instances of that type. So for an array of Vector3s it would be 12 bytes each, since each float is 4 bytes and there are 3 in a Vector3; for an array of VertexPositionNormalTexture structs it would typically be 32 bytes each, since it has two Vector3s and a Vector2. (Note that there are a few additional bytes taken up in the creation of an array, typically 12 but sometimes 16 or possibly even more, which depend on the implementation details of the array type on the particular platform the code is running on.)
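    To make the struct-versus-class distinction concrete, here is a small illustrative sketch (my own example - the Particle and Enemy types are hypothetical, not XNA types) showing which allocations count against that 1 MB counter:

      using Microsoft.Xna.Framework;

      public struct Particle   // value type: creating one does not touch the GC heap
      {
          public Vector2 Position;
          public Vector2 Velocity;
      }

      public class Enemy       // reference type: every 'new Enemy()' is a GC allocation
      {
          public Vector2 Position;
      }

      public class ParticleSystem
      {
          // One GC allocation for the whole array (arrays are reference types),
          // done once at load time rather than per frame.
          private Particle[] particles = new Particle[1000];

          public void Update(GameTime gameTime)
          {
              float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
              for (int i = 0; i < particles.Length; i++)
              {
                  // Mutating the structs in place allocates nothing, so this
                  // loop adds nothing to the 1 MB allocation counter.
                  particles[i].Position += particles[i].Velocity * dt;
              }
              // By contrast, calling 'new Enemy()' here every frame would steadily
              // push the counter toward the next automatic collection.
          }
      }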

    Read the article

  • strange squares like hints in Silverlight application?

    - by lina
    Good day! A strange square appears on mouse hover over text boxes, buttons, etc. (something like a hint) in a Silverlight navigation application - how can I remove it? A screen shot and an example .xaml page:

    <Code:BasePage x:Class="CAP.Views.Main" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:Code="clr-namespace:CAP.Code" d:DesignWidth="640" d:DesignHeight="480" Title="?????? ??????? ???????? ??? ?????"> <Grid x:Name="LayoutRoot"> <Grid.RowDefinitions> <RowDefinition Height="103*" /> <RowDefinition Height="377*" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="120*" /> <ColumnDefinition Width="520*" /> </Grid.ColumnDefinitions> <Image Height="85" HorizontalAlignment="Left" Name="image1" Stretch="Fill" VerticalAlignment="Top" Width="84" Margin="12,0,0,0" ImageFailed="image1_ImageFailed" Source="/CAP;component/Images/My-Computer.png" /> <TextBlock Grid.Column="1" Height="Auto" TextWrapping="Wrap" HorizontalAlignment="Left" Margin="0,12,0,0" Name="textBlock1" Text="Good day!" VerticalAlignment="Top" FontFamily="Verdana" FontSize="16" Width="345" FontWeight="Bold" /> <TextBlock Grid.Column="1" Grid.Row="1" TextWrapping="Wrap" Height="299" HorizontalAlignment="Left" Name="textBlock2" VerticalAlignment="Top" FontFamily="Verdana" FontSize="14" Width="441" > <Run Text="Some text "/><LineBreak/><LineBreak/><Run Text="and so on"/> <LineBreak/> </TextBlock> </Grid>

    xaml.cs:

    using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Windows.Navigation; using CAP.Code; namespace CAP.Views { public partial class Main : BasePage { public Main() : base() { InitializeComponent(); MapBuilder.AddToMap(new SiteMapUnit() { Caption = "???????", RelativeUrl = "Main" },true); ((App)Application.Current).Mainpage.tvMainMenu.SelectedItems.Clear(); } // Executes when the user navigates to this page.
protected override void OnNavigatedTo(NavigationEventArgs e) { } private void image1_ImageFailed(object sender, ExceptionRoutedEventArgs e) { } protected override string[] NeededPermission() { return new string[0]; } } } MainPage.xaml <UserControl x:Class="CAP.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Code="clr-namespace:CAP.Code" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:uriMapper="clr-namespace:System.Windows.Navigation;assembly=System.Windows.Controls.Navigation" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls" xmlns:telerikNavigation="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation" mc:Ignorable="d" Margin="0,0,0,0" Width="auto" Height="auto" xmlns:dataInput="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.Input"> <ScrollViewer Width="auto" Height="auto" BorderBrush="White" BorderThickness="0" Margin="0,0,0,0" x:Name="sV" HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto" > <ScrollViewer.Content> <Grid Width="auto" Height="auto" x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}" Margin="0,0,0,0"> <StackPanel Width="auto" Height="auto" Orientation="Vertical" Margin="250,0,0,50"> <Border x:Name="ContentBorder2" Margin="0,0,0,0" > <!--<navigation:Frame Margin="0,0,0,0" Width="auto" Height="auto" x:Name="AnotherFrame" VerticalAlignment="Top" Style="{StaticResource ContentFrameStyle}" Source="/Views/Menu.xaml" NavigationFailed="ContentFrame_NavigationFailed" JournalOwnership="OwnsJournal" Loaded="AnotherFrame_Loaded"> </navigation:Frame>--> <StackPanel Orientation="Vertical" Height="82" Width="Auto" HorizontalAlignment="Right" Margin="0,0,0,0" DataContext="{Binding}"> <TextBlock HorizontalAlignment="Right" Foreground="White" x:Name="ApplicationNameTextBlock4" Style="{StaticResource ApplicationNameStyle}" FontSize="20" Text="?????? ???????" Margin="20,16,20,0"/> <StackPanel Orientation="Horizontal" HorizontalAlignment="Right"> <Image x:Name="imDoor" Visibility="Collapsed" MouseEnter="imDoor_MouseEnter" MouseLeave="imDoor_MouseLeave" Height="24" Stretch="Fill" Width="25" Margin="10,0,10,0" Source="/CAP;component/Images/sm_white_doors.png" MouseLeftButtonDown="bTest_Click" /> <TextBlock x:Name="bLogout" MouseEnter="bLogout_MouseEnter" MouseLeave="bLogout_MouseLeave" TextDecorations="Underline" Margin="0,6,20,4" Height="23" Text="?????" 
HorizontalAlignment="Right" Visibility="Collapsed" MouseLeftButtonDown="bTest_Click" FontFamily="Verdana" FontSize="13" FontWeight="Normal" Foreground="#FF1C1C92" /> </StackPanel> </StackPanel> </Border> <Border x:Name="bSiteMap" Margin="0,0,0,0" > <StackPanel x:Name="spSiteMap" Orientation="Horizontal" Height="20" Width="Auto" HorizontalAlignment="Left" Margin="0,0,0,0" DataContext="{Binding}"> <!-- <TextBlock Visibility="Visible" TextDecorations="Underline" Height="23" HorizontalAlignment="Left" x:Name="ar" Text="1" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" Height="23" HorizontalAlignment="Left" x:Name="Map" Text="->" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" TextDecorations="Underline" Height="23" HorizontalAlignment="Left" x:Name="ar1" Text="2" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" Height="23" HorizontalAlignment="Left" x:Name="Map1" Text="->" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" /> <TextBlock Visibility="Visible" TextDecorations="Underline" Height="23" HorizontalAlignment="Left" x:Name="ar2" Text="3" VerticalAlignment="Top" Foreground="Blue" FontFamily="Verdana" FontSize="13" />--> </StackPanel> </Border> <Border Width="auto" Height="auto" x:Name="ContentBorder" Margin="0,0,0,0" > <navigation:Frame x:Name="ContentFrame" Style="{StaticResource ContentFrameStyle}" Source="Main" Navigated="ContentFrame_Navigated" NavigationFailed="ContentFrame_NavigationFailed" ToolTipService.ToolTip=" " Margin="0,0,0,0"> <navigation:Frame.UriMapper> <uriMapper:UriMapper> <!--Client--> <uriMapper:UriMapping Uri="RegistrateClient" MappedUri="/Views/Client/RegistrateClient.xaml"/> <!--So on--> </uriMapper:UriMapper> </navigation:Frame.UriMapper> </navigation:Frame> </Border> </StackPanel> <Grid x:Name="NavigationGrid" Style="{StaticResource NavigationGridStyle}" Margin="0,0,0,0" Background="{x:Null}" > <StackPanel Orientation="Vertical" Height="Auto" Width="250" HorizontalAlignment="Center" Margin="0,0,0,50" DataContext="{Binding}"> <Image Width="150" Height="90" HorizontalAlignment="Center" VerticalAlignment="Top" Source="/CAP;component/Images/logo__au.png" Margin="0,20,0,70"/> <Border x:Name="BrandingBorder" MinHeight="222" Width="250" Style="{StaticResource BrandingBorderStyle3}" HorizontalAlignment="Center" Opacity="60" Margin="0,0,0,0"> <Border.Background> <ImageBrush ImageSource="/CAP;component/Images/papka.png"/> </Border.Background> <Grid Width="250" x:Name="LichniyCabinet" Margin="0,10,0,0" HorizontalAlignment="Center" Height="211"> <Grid.ColumnDefinitions> <ColumnDefinition Width="19*" /> <ColumnDefinition Width="62*" /> <ColumnDefinition Width="151*" /> <ColumnDefinition Width="18*" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="13" /> <RowDefinition Height="24" /> <RowDefinition Height="35" /> <RowDefinition Height="35" /> <RowDefinition Height="43" /> <RowDefinition Height="28" /> <RowDefinition Height="32*" /> </Grid.RowDefinitions> <TextBlock Visibility="Visible" Grid.Row="2" Height="23" HorizontalAlignment="Left" x:Name="tLogin" Text="?????" VerticalAlignment="Top" FontFamily="Verdana" FontSize="13" Foreground="White" Margin="1,0,0,0" Grid.Column="1" /> <TextBlock Visibility="Visible" FontFamily="Verdana" FontSize="13" Foreground="White" Height="23" HorizontalAlignment="Left" x:Name="tPassw" Text="??????" 
VerticalAlignment="Top" Grid.Row="3" Grid.Column="1" /> <TextBox Visibility="Visible" Grid.Column="2" Grid.Row="2" Height="24" HorizontalAlignment="Left" x:Name="logLogin" VerticalAlignment="Top" Width="150" /> <PasswordBox Visibility="Visible" Code:DefaultButtonService.DefaultButton="{Binding ElementName=bLogin}" PasswordChar="*" Height="24" HorizontalAlignment="Left" x:Name="logPassword" VerticalAlignment="Top" Width="150" Grid.Column="2" Grid.Row="3" /> <Button x:Name="bLogin" MouseEnter="bLogin_MouseEnter" MouseLeave="bLogin_MouseLeave" Visibility="Visible" Content="?????" Grid.Column="2" Grid.Row="4" Click="Button_Click" Height="23" HorizontalAlignment="Left" Margin="81,0,0,0" VerticalAlignment="Top" Width="70" /> <TextBlock MouseLeftButtonDown="ForgotPassword_MouseLeftButtonDown" MouseEnter="ForgotPassword_MouseEnter" MouseLeave="ForgotPassword_MouseLeave" Visibility="Visible" TextDecorations="Underline" Grid.ColumnSpan="2" Grid.Row="4" Height="23" HorizontalAlignment="Left" x:Name="ForgotPassword" Text="?????? ???????" VerticalAlignment="Top" Foreground="White" FontFamily="Verdana" FontSize="13" Grid.Column="1" /> <TextBlock MouseEnter="tbRegistration_MouseEnter" MouseLeave="tbRegistration_MouseLeave" MouseLeftButtonDown="tbRegistration_MouseLeftButtonDown" Grid.Column="2" Grid.Row="6" Height="23" x:Name="tbRegistration" TextDecorations="Underline" Text="???????????" VerticalAlignment="Top" FontFamily="Verdana" FontSize="13" TextAlignment="Center" HorizontalAlignment="Center" Foreground="#FF1C1C92" FontWeight="Normal" Margin="0,0,57,0" /> <TextBlock Cursor="Arrow" Height="23" HorizontalAlignment="Left" Margin="11,-3,0,0" Text="?????? ???????" VerticalAlignment="Top" Grid.ColumnSpan="3" Grid.RowSpan="2" FontFamily="Verdana" FontSize="13" FontWeight="Bold" Foreground="White" /> <Image Visibility="Collapsed" Height="70" x:Name="imUser" Stretch="Fill" Width="70" Grid.ColumnSpan="2" Margin="11,0,0,0" Grid.Row="2" Grid.RowSpan="2" Source="/CAP;component/Images/user2.png" /> <TextBlock x:Name="tbHello" Grid.Column="2" Visibility="Collapsed" Grid.Row="2" Height="auto" TextWrapping="Wrap" HorizontalAlignment="Left" Margin="6,0,0,0" Text="" VerticalAlignment="Top" FontFamily="Verdana" FontSize="13" Foreground="White" Width="145" /> </Grid> </Border> <Border x:Name="MenuBorder" Margin="0,0,0,50" Width="250" Visibility="Collapsed"> <StackPanel x:Name="spMenu" Width="240" HorizontalAlignment="Left"> <telerikNavigation:RadTreeView x:Name="tvMainMenu" Width="240" Selected="TreeView1_Selected" SelectedValuePath="Text" telerik:Theming.Theme="Windows7" FontFamily="Verdana" FontSize="12"/> </StackPanel> </Border> </StackPanel> </Grid> <Border x:Name="FooterBorder" VerticalAlignment="Bottom" Width="auto" Height="76"> <Border.Background> <ImageBrush ImageSource="/CAP;component/Images/footer2.png" /> </Border.Background> <TextBlock x:Name="tbFooter" Height="24" Width="auto" Margin="0,20,0,0" TextAlignment="Center" HorizontalAlignment="Stretch" VerticalAlignment="Center" Foreground="White" FontFamily="Verdana" FontSize="11"> </TextBlock> </Border> </Grid> </ScrollViewer.Content> </ScrollViewer> </UserControl> MainPage.xaml.cs using System; using System.Collections.Generic; using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Navigation; using CAP.Code; using CAP.Registrator; using System.Windows.Input; using System.ComponentModel.DataAnnotations; using System.Windows.Browser; using Telerik.Windows.Controls; using System.Net; using 
System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Navigation; using System.Windows.Shapes; namespace CAP { public partial class MainPage { public App Appvars = Application.Current as App; private readonly RegistratorClient registrator; public SiteMapBuilder builder; public MainPage() { InitializeComponent(); sV.SetIsMouseWheelScrollingEnabled(true); builder = new SiteMapBuilder(spSiteMap); try { //working with service } catch { this.ContentFrame.Navigate(new Uri(String.Format("ErrorPage"), UriKind.RelativeOrAbsolute)); } } /// Recursive method to update the correct scrollviewer (if exists) private ScrollViewer CheckParent(FrameworkElement element) { ScrollViewer _result = element as ScrollViewer; if (element != null && _result == null) { FrameworkElement _temp = element.Parent as FrameworkElement; _result = CheckParent(_temp); } return _result; } // If an error occurs during navigation, show an error window private void ContentFrame_NavigationFailed(object sender, NavigationFailedEventArgs e) { e.Handled = true; ChildWindow errorWin = new ErrorWindow(e.Uri); errorWin.Show(); } } }
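    A likely culprit, judging only from the markup posted above, is the ToolTipService.ToolTip=" " attribute set on the ContentFrame in MainPage.xaml: a tooltip whose content is a single space renders as a small empty box whenever the mouse hovers over elements inside the frame. Removing that attribute (or clearing it in code-behind with ToolTipService.SetToolTip(ContentFrame, null)) should make the square disappear.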

    Read the article

  • Goodbye XML… Hello YAML (part 2)

    - by Brian Genisio's House Of Bilz
    Part 1

    After I explained my motivation for using YAML instead of XML for my data, I got a lot of people asking me what type of tooling is available in the .Net space for consuming YAML. In this post, I will discuss a nice tooling option as well as describe some small modifications to leverage the extremely powerful dynamic capabilities of C# 4.0. I will be referring to the following YAML file throughout this post:

      Recipe:
        Title: Macaroni and Cheese
        Description: My favorite comfort food.
        Author: Brian Genisio
        TimeToPrepare: 30 Minutes
        Ingredients:
          - Name: Cheese
            Quantity: 3
            Units: cups
          - Name: Macaroni
            Quantity: 16
            Units: oz
        Steps:
          - Number: 1
            Description: Cook the macaroni
          - Number: 2
            Description: Melt the cheese
          - Number: 3
            Description: Mix the cooked macaroni with the melted cheese

    Tooling: It turns out that there are several implementations of YAML tools out there. The neatest one, in my opinion, is YAML for .NET, Visual Studio and Powershell. It includes a great editor plug-in for Visual Studio as well as YamlCore, which is a parsing engine for .Net. It is still in active development, but it is certainly enough to get you going with YAML in .Net. Start by referencing YamlCore.dll, load your document, and you are on your way. Here is an example of using the parser to get the title of the Recipe:

      var yaml = YamlLanguage.FileTo("Data.yaml") as Hashtable;
      var recipe = yaml["Recipe"] as Hashtable;
      var title = recipe["Title"] as string;

    In a similar way, you can access data in the Ingredients set:

      var yaml = YamlLanguage.FileTo("Data.yaml") as Hashtable;
      var recipe = yaml["Recipe"] as Hashtable;
      var ingredients = recipe["Ingredients"] as ArrayList;
      foreach (Hashtable ingredient in ingredients)
      {
          var name = ingredient["Name"] as string;
      }

    You may have noticed that YamlCore uses non-generic Hashtables and ArrayLists. This is because YamlCore was designed to work in all .Net versions, including 1.0. Everything in the parsed tree is one of three things: Hashtable, ArrayList or value type (usually String). This translates well to the YAML structure, where everything is either a Map, a Set or a Value.

    Taking it further: Personally, I really dislike writing code like this. Years ago, I promised myself never to write the words Hashtable or ArrayList in my .Net code again. They are ugly, mostly deprecated collections that existed before we got generics in C# 2.0. Now, especially since we have dynamic capabilities in C# 4.0, we can do a lot better than this. With a relatively small amount of code, you can wrap the Hashtables and ArrayLists with a dynamic wrapper (wrapper code at the bottom of this post). The same code can be re-written to look like this:

      dynamic doc = YamlDoc.Load("Data.yaml");
      var title = doc.Recipe.Title;

    And:

      dynamic doc = YamlDoc.Load("Data.yaml");
      foreach (dynamic ingredient in doc.Recipe.Ingredients)
      {
          var name = ingredient.Name;
      }

    I significantly prefer this code over the previous. That's not all... the magic really happens when we take this concept into WPF. With a single line of code, you can bind to the data dynamically in the view:

      DataContext = YamlDoc.Load("Data.yaml");

    Then, your XAML is extremely straightforward (nothing else - no static types, no adapter code, nothing):

      <StackPanel>
        <TextBlock Text="{Binding Recipe.Title}" />
        <TextBlock Text="{Binding Recipe.Description}" />
        <TextBlock Text="{Binding Recipe.Author}" />
        <TextBlock Text="{Binding Recipe.TimeToPrepare}" />
        <TextBlock Text="Ingredients:" FontWeight="Bold" />
        <ItemsControl ItemsSource="{Binding Recipe.Ingredients}" Margin="10,0,0,0">
          <ItemsControl.ItemTemplate>
            <DataTemplate>
              <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding Quantity}" />
                <TextBlock Text=" " />
                <TextBlock Text="{Binding Units}" />
                <TextBlock Text=" of " />
                <TextBlock Text="{Binding Name}" />
              </StackPanel>
            </DataTemplate>
          </ItemsControl.ItemTemplate>
        </ItemsControl>
        <TextBlock Text="Steps:" FontWeight="Bold" />
        <ItemsControl ItemsSource="{Binding Recipe.Steps}" Margin="10,0,0,0">
          <ItemsControl.ItemTemplate>
            <DataTemplate>
              <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding Number}" />
                <TextBlock Text=": " />
                <TextBlock Text="{Binding Description}" />
              </StackPanel>
            </DataTemplate>
          </ItemsControl.ItemTemplate>
        </ItemsControl>
      </StackPanel>

    This nifty XAML binding trick only works in WPF, unfortunately. Silverlight handles binding differently, so it doesn't support binding to dynamic objects as of late (March 2010). This, in my opinion, is a major lacking feature in Silverlight, and I really hope we will see it available to us in the Silverlight 4 release. (I am not very optimistic for Silverlight 4, but I can hope for the feature in Silverlight 5, can't I?)

    Conclusion: I still have a few things I want to say about using YAML in the .Net space, including de-serialization and using IronRuby for your YAML parser, but this post is hopefully enough to see how easy it is to incorporate YAML documents in your code.

    Codeplex Site for YAML tools
    Dynamic wrapper for YamlCore
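    For reference, here is a minimal sketch of the kind of dynamic wrapper described above. The class name YamlNode and the wrapping logic are my own assumptions for illustration, not the author's actual YamlDoc source (which is linked above):

      using System.Collections;
      using System.Collections.Generic;
      using System.Dynamic;

      // Wraps YamlCore's Hashtable/ArrayList tree so members resolve dynamically.
      public class YamlNode : DynamicObject
      {
          private readonly Hashtable map;

          public YamlNode(Hashtable map) { this.map = map; }

          // Resolves doc.Recipe.Title-style member access against Hashtable keys.
          public override bool TryGetMember(GetMemberBinder binder, out object result)
          {
              if (!map.ContainsKey(binder.Name)) { result = null; return false; }
              result = Wrap(map[binder.Name]);
              return true;
          }

          // Maps become YamlNodes, sets become lists of wrapped items,
          // and scalar values (usually strings) pass through unchanged.
          private static object Wrap(object value)
          {
              var table = value as Hashtable;
              if (table != null) return new YamlNode(table);

              var list = value as ArrayList;
              if (list != null)
              {
                  var items = new List<object>();
                  foreach (object item in list) items.Add(Wrap(item));
                  return items;
              }

              return value;
          }
      }

    Under these assumptions, a YamlDoc.Load helper would simply parse with YamlCore and wrap the root: dynamic doc = new YamlNode((Hashtable)YamlLanguage.FileTo("Data.yaml"));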

    Read the article

  • How to Answer a Stupid Interview Question the Right Way

    - by AjarnMark
    Have you ever been asked a stupid question during an interview - one that seemed to have no relation to the job responsibilities at all? Tech people are often caught off-guard by these apparently irrelevant questions, but there is a way you can turn them to your favor. Here is one idea.

    While chatting with a couple of folks between sessions at SQLSaturday 43 last weekend, one of them expressed frustration over a seemingly ridiculous and trivial question that she was asked during an interview, and she believes it cost her the job opportunity. The question, as I remember it being described, was, "What is the largest byte measurement?". The candidate made up a guess ("zetabyte") during the interview, which is actually closer than she may have realized. According to Wikipedia, there is a measurement known as zettabyte, which is 10^21, and the largest one listed there is yottabyte, at 10^24.

    My first reaction to this question was, "That's just a hiring manager that doesn't really know what they're looking for in a candidate. Furthermore, this tells me that this manager really does not understand how to build a team." In most companies, team interaction is more important than uber-knowledge. I didn't ask, but this could also be another geek on the team trying to establish their Alpha-Geek stature. I suppose that there are a few, very few, companies that can build their businesses on hiring only the extreme alpha-geeks, but that certainly does not represent the majority of businesses in America.

    My friend who was there suggested that the appropriate response to this silly question would be, "And how does this apply to the work I will be doing?" Of course, this is an understandable response when you're frustrated because you know you can handle the technical aspects of the job, and it seems like the interviewer is just being silly. But it is also a direct challenge, which may not be the best approach in interviewing. I do have to admit, though, that there are those folks who just won't respect you until you do challenge them - but again, I don't think that is the majority.

    So after some thought, here is my suggestion: "Well, I know that there are petabytes and exabytes and things even larger than that, but I haven't been keeping up on my list of Greek prefixes that have not yet been used, so I would have to look up the exact answer if you need it. However, I have worked with databases as large as 30 terabytes. How big are the largest databases here at X Corporation?" Perhaps with a follow-up of, "Typically, what I have seen in companies that have databases of your size is that the three biggest challenges they face are: A, B, and C. What would you say are the top 3 concerns that you would like the person you hire to be able to address?... Here is how I have dealt with those concerns in the past (or 'Here is how I would tackle those issues for you...')."

    Wait! What just happened?! We took a seemingly irrelevant and frustrating question and turned it around into an opportunity to highlight our relevant skills and guide the conversation back in a direction more to our liking and benefit. In more generic terms, here is what we did:

    1. Admit that you don't know the specific answer off the top of your head, but can get it if it's truly important to the company. Maybe for some reason it really is important to them.
    2. Mention something similar or related that you do know, reassuring them that you do have some knowledge in that subject area.
    3. Draw a parallel to your past work experience.
    4. Ask follow-up questions about the company's specific needs and discuss how you can fulfill those.

    This type of thing requires practice and some forethought. I didn't come up with this answer until a day later, which is too late when you're interviewing. I still think it is silly for an interviewer to ask something like that, but at least this is one way to spin it to your advantage while you consider whether you really want to work for someone who would ask a thing like that. Remember, interviewing is a two-way process: you're deciding whether you want to work there just as much as they are deciding whether they want you.

    There is always the possibility that this was a calculated maneuver on the part of the hiring manager just to see how quickly you think on your feet and how you handle stupid questions. Maybe he knows something about the work environment and he's trying to gauge whether you'll actually fit in okay. And if that's the case, then the above response still works quite well.

    Read the article

< Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >