Search Results

Search found 20785 results on 832 pages for 'idea'.


  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. It provides a host of great new functionality that unifies the features of the various AJAX/REST APIs Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a more consistent whole. Web API addresses many of the concerns developers had with those older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But RPC style calls, which are common with AJAX callbacks in Web applications, have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks.

    RPC vs. 'Proper' REST

    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls tend to simply map a server side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired.

    RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of Web API use cases will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that need things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case, Web API does a good job of providing both the RPC abstraction and the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences, especially when it comes to multiple parameters, if you're coming from ASP.NET AJAX services or WCF REST.

    Action Routing for RPC Style Calls

    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs, but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
    Thankfully Web API also supports Action based routing, which allows you to create RPC style endpoints fairly easily:

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi",
                action = "GetAlbums"
            }
        );

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes.

    Using routing you can pass multiple parameters either on the route itself, or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

        [HttpPost]
        public string PostAlbum(Album album)
        {
            return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
        }

    There are actually two ways to call this endpoint (albums/PostAlbum):

    Using the Model Binder with plain POST values

    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
            success: function (result) {
                alert(result);
            },
            error: function (xhr, status, p3, p4) {
                var err = "Error " + " " + status + " " + p3;
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    The POST data for this request is plain urlencoded form data. The model binder and its straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter

    The other option is to post data using a JSON string. The process for this is similar, except that you create a JavaScript object and serialize it to JSON first:

        album = {
            AlbumName: "PowerAge",
            Entered: new Date(1977, 0, 1)
        }

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            success: function (result) {
                alert(result);
            }
        });

    Here the data is sent using a JSON object rather than form data, and the data is JSON encoded over the wire, which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date. I provided the date as a plain string rather than a JavaScript date value, and the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.
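    As an aside, the post never shows the server-side Album type these calls bind to. A minimal sketch consistent with the properties the examples above actually use might look like this (an assumption on my part; the real class may well have more members):

        using System;

        public class Album
        {
            // Only the properties the examples above bind to.
            public string AlbumName { get; set; }
            public DateTime Entered { get; set; }
        }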
    Passing multiple Parameters to a Web API Controller

    Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model. But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:

        [HttpPost]
        public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work:

    Use both POST *and* QueryString Parameters in Conjunction

    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with:

        /albums/PostAlbum?userToken=sekkritt

    but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types.

    Use a single Object that wraps the two Parameters

    If you go by service based architecture guidelines, every service method should always accept and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:

        public PostAlbumResponse PostAlbum(PostAlbumRequest request)
        {
            var album = request.Album;
            var userToken = request.UserToken;

            return new PostAlbumResponse()
            {
                IsSuccess = true,
                Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
            };
        }

    with these support types:

        public class PostAlbumRequest
        {
            public Album Album { get; set; }
            public User User { get; set; }
            public string UserToken { get; set; }
        }

        public class PostAlbumResponse
        {
            public string Result { get; set; }
            public bool IsSuccess { get; set; }
            public string ErrorMessage { get; set; }
        }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

        var album = {
            AlbumName: "PowerAge",
            Entered: "1/1/1977"
        }
        var user = {
            Name: "Rick"
        }
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) {
                alert(result.Result);
            }
        });

    I assemble the individual types first and then combine them, in the data: property of the $.ajax() call, into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or user token) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually can use.
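    A note on the first workaround above: Web API also lets you state the binding sources explicitly with its [FromBody] and [FromUri] attributes, which can make the mixed body/query-string binding easier to read. A minimal sketch (the attributes are standard Web API; the controller shape is assumed from the examples above, not taken from the original post):

        using System;
        using System.Web.Http;

        public class AlbumApiController : ApiController
        {
            [HttpPost]
            public string PostAlbum([FromBody] Album album, [FromUri] string userToken)
            {
                // album is deserialized from the POST body; userToken comes from
                // the query string, e.g. /albums/PostAlbum?userToken=sekkritt
                return String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken);
            }
        }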
    Use JObject to parse multiple Property Values out of an Object

    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls. Rather than directly calling a service you always passed an object which contained properties for each parameter:

        { parm1: Value, parm2: Value2 }

    WCF REST/ASP.NET AJAX would then parse these top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer, you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

        var album = {
            AlbumName: "PowerAge",
            Entered: "1/1/1977"
        }
        var user = {
            Name: "Rick"
        }
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) {
                alert(result);
            }
        });
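    To see the same JObject mechanics in isolation, here's a small standalone sketch (my addition, not from the original post) that parses an equivalent payload with Json.NET and pulls out a strongly typed object. It assumes the Album class sketched earlier and uses an ISO date so the DateTime parse is culture-safe:

        using System;
        using Newtonsoft.Json.Linq;

        class JObjectDemo
        {
            static void Main()
            {
                // Roughly the shape the $.ajax call above posts to the server
                var payload = @"{ ""Album"": { ""AlbumName"": ""PowerAge"", ""Entered"": ""1977-01-01"" },
                                  ""User"": { ""Name"": ""Rick"" },
                                  ""UserToken"": ""sekkritt"" }";

                dynamic json = JObject.Parse(payload);

                // Walk properties with dynamic 'dot' syntax...
                string token = json.UserToken;

                // ...and convert sub-objects into strongly typed instances
                Album album = ((JObject)json.Album).ToObject<Album>();

                Console.WriteLine("{0} {1}", album.AlbumName, token);
            }
        }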
    Summary

    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable; you just need to know what options are available to simulate this behavior. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally, APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the query string. But saying that doesn't mean you won't occasionally run into a situation where you need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now, I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API. At least there are options available to make it work.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.


  • How to read verbose VC++ linker output

    - by Assaf Lavie
    Trying to debug some linker errors, I turned on /VERBOSE and I'm trying to make sense of the output. It occurs to me that I really don't know how to read it. For example:

        1>Compiling version info
        1>Linking...
        1>Starting pass 1
        1>Processed /DEFAULTLIB:mfc80.lib
        1>Processed /DEFAULTLIB:mfcs80.lib
        1>Processed /DEFAULTLIB:msvcrt.lib
        1>Processed /DEFAULTLIB:kernel32.lib
        1>Processed /DEFAULTLIB:user32.lib
        ....
        1>Processed /DEFAULTLIB:libgslcblasMD.lib
        1>Searching libraries
        1>    Searching V:\Src\Solutions\\..\..\\Common\Win32\Lib\PlxApi.lib:
        1>    Searching ..\..\..\..\out\win32\release\lib\camerageometry.lib:
        1>    Searching ..\..\..\..\out\win32\release\lib\geometry.lib:
        1>      Found "public: __thiscall VisionMap::Geometry::Box2d::operator class VisionMap::Geometry::Box2DInt(void)const " (??BBox2d@Geometry@VisionMap@@QBE?AVBox2DInt@12@XZ)
        1>        Referenced in FocusDlg.obj
        1>        Loaded geometry.lib(Box2d.obj)
        1>Processed /DEFAULTLIB:CGAL-vc80-mt.lib
        1>Processed /DEFAULTLIB:boost_thread-vc80-mt-1_33_1.lib

    What's going on here? I think I understand this bit:

        1>    Searching ..\..\..\..\out\win32\release\lib\geometry.lib:
        1>      Found "public: __thiscall VisionMap::Geometry::Box2d::operator class VisionMap::Geometry::Box2DInt(void)const " (??BBox2d@Geometry@VisionMap@@QBE?AVBox2DInt@12@XZ)
        1>        Referenced in FocusDlg.obj
        1>        Loaded geometry.lib(Box2d.obj)

    It's trying to find the implementation of the above operator, which is used somewhere in FocusDlg.cpp, and it finds it in geometry.lib. But what does

        1>Processed /DEFAULTLIB:libgslcblasMD.lib

    mean? What determines the order of symbol resolution? Why is it loading this particular symbol while processing libgslcblasMD.lib, which is a 3rd party library? Or am I reading it wrong? It seems that the linker is going through the symbols referenced in the project's various object files, but I have no idea in what order. It then searches the static libraries the project uses - by project reference, explicit import and automatic default library imports - but it does so in an order that, again, seems arbitrary to me.

    When it finds a symbol, for example in geometry.lib, it then continues to find a bunch of other symbols from the same lib:

        1>    Searching V:\Src\Solutions\\..\..\\Common\Win32\Lib\PlxApi.lib:
        1>    Searching ..\..\..\..\out\win32\release\lib\camerageometry.lib:
        1>    Searching ..\..\..\..\out\win32\release\lib\geometry.lib:
        1>      Found "public: __thiscall VisionMap::Geometry::Box2d::operator class VisionMap::Geometry::Box2DInt(void)const " (??BBox2d@Geometry@VisionMap@@QBE?AVBox2DInt@12@XZ)
        1>        Referenced in FocusDlg.obj
        1>        Loaded geometry.lib(Box2d.obj)
        1>Processed /DEFAULTLIB:CGAL-vc80-mt.lib
        1>Processed /DEFAULTLIB:boost_thread-vc80-mt-1_33_1.lib
        1>      Found "public: __thiscall VisionMap::Geometry::Box2DInt::Box2DInt(int,int,int,int)" (??0Box2DInt@Geometry@VisionMap@@QAE@HHHH@Z)
        1>        Referenced in FocusDlg.obj
        1>        Referenced in ImageView.obj
        1>        Referenced in geometry.lib(Box2d.obj)
        1>        Loaded geometry.lib(Box2DInt.obj)
        1>      Found "public: virtual __thiscall VisionMap::Geometry::Point3d::~Point3d(void)" (??1Point3d@Geometry@VisionMap@@UAE@XZ)
        1>        Referenced in GPSFrm.obj
        1>        Referenced in MainFrm.obj
        1>        Loaded geometry.lib(Point3d.obj)
        1>      Found "void __cdecl VisionMap::Geometry::serialize<class boost::archive::binary_oarchive>(class boost::archive::binary_oarchive &,class VisionMap::Geometry::Point3d &,unsigned int)" (??$serialize@Vbinary_oarchive@archive@boost@@@Geometry@VisionMap@@YAXAAVbinary_oarchive@archive@boost@@AAVPoint3d@01@I@Z)
        1>        Referenced in GPSFrm.obj
        1>        Referenced in MainFrm.obj
        1>        Loaded geometry.lib(GeometrySerializationImpl.obj)

    But then, for some reason, it goes on to find symbols that are defined in other libs, and returns to geometry later on (a bunch of times). So clearly it's not doing "look in geometry and load every symbol that's referenced in the project, and then continue to other libraries". But it's not clear to me what the order of symbol lookup is. And what's the deal with all those libraries being processed at the beginning of the linker's work without finding any symbols to load from them? Does this project really not use anything from msvcrt.lib or kernel32.lib? That seems unlikely. So basically I'm looking to decipher the underlying order of the linker's operation.


  • Foreach PHP Error

    - by Logan
    I am receiving the following foreach error in my PHP file and I have no idea how to fix it. Does anyone have any ideas? When I load the page I get this:

        Warning: Invalid argument supplied for foreach() in /home/mysite/public_html/merge/class/global_functions.php on line 61
        Warning: Invalid argument supplied for foreach() in /home/mysite/public_html/merge/class/global_functions.php on line 89

    (the line 89 warning is repeated dozens more times)

    Lines 61 and 89 of /class/global_functions.php fall within the following code (lines 61 to 98):

        foreach($GLOBALS['userpermbit'] as $v)
        {
            if(strstr($v['perm'],'|'.$pageperm_id[0]['id'].'|'))
                return true;
        }

        //if they dont have perms and we're not externally including functions return false
        if ($GLOBALS['external'] != true)
            return false;

        return true;
        }

        //FUNCTION: quick perm check using perm info from the onload perm check
        function stealthPermCheck($req)
        {
            #if theyre an admin give them perms
            if(@in_array($GLOBALS['user'][0]['id'], $GLOBALS['superAdmins']))
                return true;

            if(!is_numeric($req))
            {
                #if the req is not numeric we need to match a title, not a permid. So try to do that
                foreach($GLOBALS['userpermbit'] as $v)
                {
                    if(stristr($v['title'],$req))
                        return true;
                }
            }else{
                #check if they have perms numerically if so return true
                foreach($GLOBALS['userpermbit'] as $v)
                {
                    if(strstr($v['perm'],'|'.$req.'|'))
                        return true;
                }
            }

            #if none of this returned true they dont have perms, return false
            return false;
        }


  • CodePlex Daily Summary for Saturday, February 19, 2011

    Popular Releases

    Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under the lock screen. Fixed a bug when you open a pdf/mobi file without starting Adobe Reader/Amazon Kindle first. Boosted loading time for folders. Added the \Windows directory (all devices). You can now interact with the filesystem while it is loading!

    Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The 2nd beta release of the Map Editor; we have fixed a big bug in the files regen.

    Document.Editor: 2011.6: What's new for Document.Editor 2011.6: new Left to Right and Right to Left support, new Indent more/less support, improved Home tab, improved tooltips/shortcut keys, minor bug fixes, improvements and speed-ups.

    Catel - WPF and Silverlight MVVM library: 1.2: Catel history legend: (+) Added, (*) Changed, (-) Removed, (x) Error/bug (fix). For more information about issues or new feature requests, please visit: http://catel.codeplex.com Version 1.2, release date 2011/02/17. Added/fixed: (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight. (+) DataObjectBase can now be converted to Json via Person.ToJson(). (+)...

    ??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure, SQL Azure, Windows Azure AppFabric, Windows Live Messenger Connect, Bing Maps. ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????: http://blog.csdn.net/sjb5201/archive/2011...

    Image.Viewer: 2011: First version of 2011.

    Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...

    VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: this release does not work with custom VsTortoise toolbars; these get removed every time you shut down Visual Studio (#7940). Build 29 (beta). New: added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters when using TSVN 1.4.x or older (#7338, thanks psifive). Fix: add-in does not work when "TortoiseSVN/bin" is not added to the PATH environment variable (#7357). Fix: missing error message when ...

    Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: We are happy to introduce the latest version of Sense/Net with integrated ECM workflow capabilities! In the past weeks we have been working hard to migrate the product to .NET 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...

    thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available; for example the Web Item CommandBar does not exist ...

    Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a portable environment, you can use the "Clean Install" and target your portable folder. Tested and it works fine. Changes for this release: reworked the ToolStrip settings, just to avoid the VS.NET clash with auto-generating files for .settings files; renamed it to .settings.config. Packaged both the log4net and ToolStripSettings files into the installer. Upgraded the version inform...

    AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: online users, avatars, a copy function (to create a new article from another one), SEO improvements (friendly URLs), new admin buttons, and more...

    Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to Beta stage, publish photo feature, "email" field of User object added, new Graph API objects: Group, Event; new Graph API connections: likes, groups, events.

    DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html What is new? The Grid extension was added. The ModelBinder was added, which helps you create bindable data Actions. The DnaFor() control factory was added to enable Model bindable extensions. Enhanced the ListBox and ComboBox data binding.

    Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features. Many bugfixes.

    Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: Fixes and/or improvements. Major: added complete support for VC projects including .vcxproj & .vcproj; all padding issues fixed; a project's assembly versions are only changed if the project has been modified. Minor: order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision; fixed issue with global variable storage with some projects; fixed issue where if a project item's file does not exist, a ...

    Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.

    TV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.

    Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1, including the broken per-desktop backgrounds. Further improves the speed of switching desktops. A few UI performance improvements. Added donations links.

    NuGet: NuGet 1.1: NuGet is a free, open source, developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit our issues list. A...

    New Projects

    ComplexityEvolution: Research project.

    CRM 2011 Metadata Browser: The CRM 2011 Metadata Browser is a Silverlight 4 application that is packaged as a managed CRM solution. This tool allows you to view metadata including entities, attributes and relationships. The 2011 SOAP endpoint is used to connect to CRM using the Organization.svc web service.

    EFCFvsNH3: A sample project that shows the main differences between Entity Framework Code First and NHibernate 3: mapping, configuration, DB initialization, query API, session & transaction, validation.

    E-Teacher for IELTS preparation: E-Teacher helps IELTS students prepare for the IELTS Academic and General Training test. Qualified English teachers can register with the e-community and help candidates understand what they really need to improve for the IELTS exam and how to reach the maximum band score.

    FIM CM Extensions: Extensions for Forefront Identity Manager 2010 to enable integration between the FIM Service workflow and the FIM Certificate Management workflow.

    Game Files Open - Map Editor: This is a map editor for the Metin2 clients; it's very simple to edit or create a map with this tool.

    Garbage Collection Sample Code: Garbage collection sample code that demonstrates the differences between the large and small object heaps. This code supports the blog post at http://www.deepcode.co.uk

    GardenersWorld: The aim of the GardenersWorld community website is to provide a platform for budding gardening enthusiasts, hobbyists and professionals to share information.

    Harvester - Debug Monitor for Log4Net and NLog: Harvester enables you to monitor all Win32 debug output from all applications running on your machine. Watch real time Log4Net and NLog output across multiple applications at the same time. Trace a call from client to server and back without having to look at multiple log files.

    Hjelp! Jeg skal ha farmakokinetikk-eksamen!: Struggling to memorize formulas for your pharmacokinetics exam? Then your rescue is here! :D

    Mercury Business Framework: Mercury Business Framework is a project set up to define basic objects used by the vast majority of business and non-business software. The idea is to define the low level objects required by most applications on the web and desktop.

    MyDistrictBuilder: MyDistrictBuilder allows anybody to build legislative districts and submit them to the Florida House of Representatives. It is built on Bing Maps, Silverlight and Azure, and written in C#. It is written to allow anyone to adapt it for any state's census geography. www.floridaredistricting.cloudapp.net

    MySchoolApp: MySchoolApp is a customizable application written in Visual Basic and C# for the Windows Mobile Phone 7 platform using Visual Studio Professional 2010. The application combines links to RSS and Web sites about a school, and displays a map and local weather.

    Osm Parser Community Edition: Osm Parser parses highways in OpenStreetMap to generate routable road networks in SpatiaLite.

    PRBox Cloud Website: This is the website base for PRBox.com

    SEO Reporter (open source search engine optimization software): SEO Reporter is an open source search engine optimization application for detecting HTML related SEO violations, gathering data about a Web page and analyzing its keywords strategy. It's a Windows navigation application developed in F#.

    Setting timeout for SharePoint 2010 Silverlight web part: This web part overwrites the 5-second hard-coded timeout for the SharePoint 2010 Silverlight web part.

    SharePoint 2010 Central Administration Automatic Resources Link Generator: This feature will automatically generate the resources links list (Quick Links) in your SharePoint 2010 Central Administration site, making it easier for SharePoint admins to navigate through the common Central Administration activities without populating it themselves - VS2010/C#.

    SharePoint Holiday Loader: SharePoint Holiday Loader allows you to quickly import public holidays into a SharePoint calendar from the standard .HOL format.

    Sohu?????: ?????????WPF?????????????,????????????(??、??、???),??、??、???????,????????????,??????????????。 ??V1????????,V2?????????????????。

    SP2010 Form Manipulator: This project will hopefully make it easier to manipulate the list form in SharePoint 2010.

    SPRotator: A jQuery powered web part for SharePoint that cycles through any type of list.

    SQL Script to Create a Website Directory: Here you can download a SQL script to create a website directory using SQL Server. This is only the basic directory SQL script for developing your website; directory software may be published in the future.

    Sri Hits Zone: This is an online repository used to share Sri Lankan music. It helps Sri Lankans living abroad keep in touch with Sri Lankan artists and their music.

    testz: testz

    Time domain dissipative acoustic problem: tddap

    Windows Azure Hosted Services VM Manager: Windows Azure Hosted Services VM Manager is a Windows service that can manage the number of hosted services running in Azure, either on a time based schedule or by CPU load. This allows your service to scale for either dynamic load or a known schedule.

    Z80TR: Z80TR


  • CodePlex Daily Summary for Friday, December 10, 2010

    Popular Releases

    Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Today we are releasing the final version of Visifire, v3.6.5, with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. You can visit the Visifire documentation to learn more: http://www.visifire.com/visifirechartsdocumentation.php This release also includes a few bug fixes: chart threw an exception while adding a new Axis in the chart using Vi...

    PHPExcel: PHPExcel 1.7.5 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net then pear install pearplex/PHPExcel. Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.

    UserVoice Helper for WebMatrix: UserVoice Helper v0.9: This version will work with ASP.NET WebPages and ASP.NET MVC applications.

    DNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03). There are C# and VB versions of this module for this initial release; no promises that going forward there will be packages for both languages provided for future releases. This module provides the following functionality: create and display articles; display a paged list of articles; articles get created as DNN ContentItems; categorization provided through DNN Taxonomy; SEO functionality for article display providi...

    UOB & ME: UOB_ME 2.5: latest version.

    CouchDB.NET: CouchDB.NET 0.1: Libraries and providers to use CouchDB features from .NET. This distribution includes the following projects. MachineKeyGenerator: command line tool to generate a machine key string for use in App.Config and Web.Config files. CouchDB.NET: library to facilitate the use of CouchDB features; it uses Hadi Hariri's EasyHttp library to communicate with the CouchDB server (more info at https://github.com/hhariri/EasyHttp). CouchDb.ASP.NET: ASP.NET Membership Provider and ASP...

    AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing build pages from Mobafire.com as well! Just insert the URL to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (it is still recommended to use it with caution and to read the documentation). It is now possible to associate *.lolm files with AutoLoL to quickly open them. The selected spells are now displayed in the masteries tab for qu...

    SubtitleTools: SubtitleTools 1.2: Added auto insertion of the RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for RTL languages. Fixed the delete rows issue.

    PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus the additional features listed below. Improved detection logic for existing PHP installations: PHP Manager now detects the location of the php.ini file in accordance with the PHP specifications. Configuring date.timezone: PHP Manager can automatically set the date.timezone directive, which is required to be set starting from PHP 5.3. Ability to ...

    Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.

    SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug where the running state hadn't been updated after the socket server stopped; fixed a synchronization issue when clearing timed-out sessions; fixed a bug in ArraySegmentList; fixed a bug in getting a configuration value.

    CslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2, released 2010 December 7, and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi and Templates-2010-10-07.zip. For getting started instructions, refer to the How-to section. Overview of the changes: since CTP1, 53 work items were closed (28 features, 24 issues and 1 task). During these 60 days a lot of work has been done in several areas. First the stereotypes: EditableRoot is OK, EditableChild is OK, EditableRootCollection is OK, Editable...

    My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.

    EnhSim: EnhSim 2.2.0 ALPHA: This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...

    ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup WhiteSpaceFilterAttribute, tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.

    nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).

    myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook style view, new detail view, new Group By...); added sort by media; added Manage Movie Studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from a BibTeX XML file; bug fixing.

    mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web (for installing to hosting). System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database); if .\SQLEXPRESS, auto creation of the database (App_Data folder). mytrip.mvc 1.0.49.0 beta src system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables in the database); if .\SQLEXPRESS, auto creation of the database (App_Data folder); Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...

    Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys. Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it). The PopupMenuSeparator is now completely based on the PopupMenuItem class. Moved item manipulation code to a partial class in PopupMenuItemsControl.cs. Moved menu management and keyboard navigation code to the new PopupMenuManager class. Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...

    MiniTwitter: 1.62: MiniTwitter 1.62 ???? ?? ??????????????????????????????????????? 140 ?????????????????????????? ????????????????????????????????

    New Projects

    AccountingGuid: for testing only.

    Chinese Nag Screen: This is a simple but effective program for learning to recognize Mandarin characters. The application sits in the system tray and displays a random character throughout your day. You can only get rid of it by typing in the pinyin.

    CouchDB.NET: .NET libraries to use CouchDB from .NET. Included are Membership and Roles providers so that you may use CouchDB as your integrated DB backend in your ASP.NET projects. Please see the readme.txt file for instructions.

    DataSetMapper: The idea behind DataSetMapper is to provide support for the automatic mapping of legacy DataSet based structures to proper domain objects. In essence the aim is to create the mapping aspect of an ORM without the persistence concerns.

    EasyXnaAudio: EasyXnaAudio is a simple component for use in XNA Game Studio 3.1/4.0 projects that provides an easy interface to load, play, and manage songs and sounds in your game.

    FixMailboxSD - Exchange Mailbox Security Descriptor Canonicalizer: This is a small utility to fix mailbox security descriptors in Microsoft Exchange that have become non-canonical. It must be run on a machine with Exchange System Manager for Exchange 2003 installed, but it will work against mailboxes on 2003 or 2007 (not 2010).

    GearSynth Plugin: a plugin for GraphSynth that makes gear trains.

    GroceryList: TBD with first version.

    IBMS Suite Build on the Associate Platform: A new way of approaching information systems. From the UI, users of the IS will be able to build and manipulate the IS in whatever way fits their needs. We have simplified development, removed the chasm between management and IT, and given the power of simplification to the user!

    Ivy Nasha Framework: A PHP framework.

    jQuery helpers for ASP.NET and ASP.NET MVC: jQuery helpers makes it easier for ASP.NET developers to build jQuery scripts. It's developed in C#.

    JSTest.NET: JSTest.NET enables JavaScript unit tests to be run directly in the test framework of your choice (MSTest, NUnit, xUnit, etc), all without the need for a web browser. JSTest.NET utilizes the Windows Script Host (CScript) to run fast, fully debuggable JavaScript unit tests!

    Multicore Task Framework: MTF is a visual tool to simplify building robust component based .NET applications. MTF is designed to make full use of the power of multi-core processors.

    Nazha Script On DLR: Nazha.

    PascalESE - a Delphi/Pascal class library for Microsoft ESENT database API: This Pascal class library, primarily written for Delphi's Object Pascal, provides a lightweight and easy-to-use wrapper around the ESENT API.

    Perpetuum Hangar: A character planner for the online game "Perpetuum".

    Projeto Exemplo: A sample project for activity 3 of the course.

    PSiteCode: PSiteCode Manager.

    rScript Engine: The rScript scripting engine is a managed script engine written in C# that supports Visual Basic and C# syntax based scripts. It provides types for dynamically getting and setting properties, invoking methods, and run-time compilation of scripts.

    SharePoint 2010 User Profile WebPart: This webpart shows all user profile properties and the values of those properties for a particular user profile. The results are shown in a table containing the display and technical names together with the user value.

    SHC: shri.

    SHMTools: SHMTools is a set of compatible software tools (mostly Matlab based) for structural health monitoring (SHM) research. This includes algorithms for system design, modeling, data acquisition, feature extraction, classification, and prognosis.

    SwapWin: SwapWin is a tiny and handy tool which swaps windows on different screens. Developed in C# and .NET 3.5.

    Teachers Diary: Teachers Diary is an application realizing an electronic teacher's notepad with student marks. The current localization of the application is in the Czech language only.

    VkApp: Vk app for downloading.

    WebSpirit: A lightweight web server implemented in C# which supports a sufficiently extensible feature set. By zju.

    WPF & MEF Studio: WPF & MEF Studio.


  • Cannot redeclare class error when generating PHPUnit code coverage report

    - by Cobby
    Starting a project with Zend Framework 1.10 and Doctrine 2 (Beta1). I am using namespaces in my own library code. When generating code coverage reports I get a Fatal Error about redeclaring a class. To provide more info, I've commented out the xdebug_disable() call in my phpunit executable so you can see the function trace (local variables output is disabled because there was too much output). Here's my Terminal output:

        $ phpunit
        PHPUnit 3.4.12 by Sebastian Bergmann.
        ........
        Time: 4 seconds, Memory: 16.50Mb
        OK (8 tests, 14 assertions)
        Generating code coverage report, this may take a moment.
        PHP Fatal error:  Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93
        PHP Stack trace:
        PHP   1. {main}() /usr/local/zend/bin/phpunit:0
        PHP   2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54
        PHP   3. PHPUnit_TextUI_Command->run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146
        PHP   4. PHPUnit_TextUI_TestRunner->doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213
        PHP   5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478
        PHP   6. PHPUnit_Framework_TestResult->getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97
        PHP   7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623

        Fatal error: Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93

        Call Stack:
            0.0004     322888   1. {main}() /usr/local/zend/bin/phpunit:0
            0.0816    4114628   2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54
            0.0817    4114964   3. PHPUnit_TextUI_Command->run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146
            0.1151    5435528   4. PHPUnit_TextUI_TestRunner->doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213
            4.2931   16690760   5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478
            4.2931   16691120   6. PHPUnit_Framework_TestResult->getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97
            4.2931   16691148   7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623

    (I have no idea why it shows the error twice...?)

    And here is my phpunit.xml:

        <phpunit bootstrap="./code/tests/application/bootstrap.php" colors="true">
            <!-- bootstrap.php changes directory to trunk/code/tests,
                 all paths below are relative to this directory. -->
            <testsuite name="My Promotions">
                <directory>./</directory>
            </testsuite>
            <filter>
                <whitelist>
                    <directory suffix=".php">../application</directory>
                    <directory suffix=".php">../library/Cob</directory>
                    <exclude>
                        <!-- By adding the below line I can remove the error -->
                        <file>../library/Cob/Application/Resource/HelperBroker.php</file>
                        <directory suffix=".phtml">../application</directory>
                        <directory suffix=".php">../application/doctrine</directory>
                        <file>../application/Bootstrap.php</file>
                        <directory suffix=".php">../library/Cob/Tools</directory>
                    </exclude>
                </whitelist>
            </filter>
            <logging>
                <log type="junit" target="../../build/reports/tests/report.xml" />
                <log type="coverage-html" target="../../build/reports/coverage" charset="UTF-8"
                     yui="true" highlight="true" lowUpperBound="50" highLowerBound="80" />
            </logging>
        </phpunit>

    I have added a <file> tag inside the <exclude> element, which seems to hide this problem. I do have another application resource, but it doesn't seem to have a problem (the other one is a Doctrine 2 resource). I'm not sure why it is specific to this class; my entire library is autoloaded, so there aren't any include/require calls anywhere. I guess it should be noted that HelperBroker is the first file in the filesystem stemming out from library/Cob. I am on Snow Leopard with the latest/recent versions of all software (Zend Server, Zend Framework, Doctrine 2 Beta1, Phing, PHPUnit, PEAR).


  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is “it depends”. Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I’ve seen and let you come to your own conclusions. What are the possible SharePoint roles? I guess the first thing you need to understand are the different roles that exist in SharePoint (and their are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp. SharePoint Administrator The absolutely positively only role that you should not be without no matter the size of your organization or SharePoint deployment is a SharePoint administrator. These guys are essential to keeping things running and figuring out what’s wrong when things aren’t running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don’t even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I’m talking about: We have a rockstar admin… and I’m sure she’s sick of my throwing her name around so she’ll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our Server teams came to us and said Hi Lori, I’m finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem? right? Sounded fair to me… but no…. not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update that is out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches and then when some problem occurred in SharePoint we’d have to go through the fun task of tracking down exactly what caused the issue and resolve it. How much time would that have taken? If you have a junior SharePoint admin or an admin who’s not out there staying on top of what’s going on you could have spent days tracking down something so simple as applying a patch you should not have applied. I will even go as far to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren’t way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome Admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin. 
SharePoint Developer

Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here, and there are many flavors of "developers". Are you writing custom code? Using SharePoint Designer? What about SharePoint branding? Are all of these considered developers? I would say yes. Are they interchangeable? I'd say no. Development in SharePoint is such a large beast in itself. It's not so large that you can't know it all well, but it is large enough that many people specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you had better make sure they are well taken care of, because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker.

SharePoint Power User / No-Code Solutions Developer

These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time, and they will create views, dashboards, and KPIs that will blow your mind and give your execs the "wow" they are looking for. Not only can they deliver that wow factor, but they can mash up, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint custom code developer, let one of these rockstars look at it and show you what they can do (probably in less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller.

SharePoint Developer – Custom Code

If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code. Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD workflows is the big one that comes to mind). If you truly want to knock down all the walls, then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are custom workflows, custom web parts, web service functionality, imports from legacy systems, exports to legacy systems, custom actions, event receivers, and service applications (2010). These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett.

SharePoint Branding

"But it LOOKS like SharePoint!" Somebody call the WAAAAAAAAAAAAHMbulance… Themes, master pages, page layouts, zones, and over 2000 styles in CSS… these guys not only have to be comfortable with all of SharePoint's quirks and pain points when branding, but they have to know it TWICE, for publishing and non-publishing sites.
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heather Waterman, Cathy Dew, and Marcy Kellar.

SharePoint Architect

SharePoint architects are generally SharePoint admins or developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. They help you structure your farms to meet your needs and help you design your applications the correct way. It's always a good idea to bring in a rockstar SharePoint architect to do a sanity check and make sure you aren't doing anything stupid. Most organizations probably do not have a rockstar architect on staff; these guys are generally brought in at the deployment of a farm, the upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap.

Other Roles / Specialties

On top of all these other roles you also get people who specialize in things like reporting, BDC (BCS in 2010), search, performance, security, project management, etc., etc., etc. Again, most organizations will not have one of these gurus on staff; they'll just pay through the nose for them when they need them. :)

SharePoint End User

Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it. :) We love you guys… really!!!

Okay, all that's fine and dandy, but what should MY SharePoint team look like?

It depends! Okay… Are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great admin team and a great no-code solution development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can't answer the question about numbers without knowing your specific needs; I can just tell you who you MIGHT need and what they will do for you. I'll leave you with what my ideal SharePoint team would look like for a particular scenario:

Farm / Organization Structure
Dev, QA, and 2 production farms.
5000 – 10000 users
Custom development and integration with legacy systems
Team sites, My Sites, intranet, document libraries, and overall company collaboration

Team
Rockstar SharePoint administrator
2-3 junior SharePoint administrators
SharePoint architect / lead developer
2 power users / no-code solution developers
2-3 custom code developers
Branding expert

With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the names of a couple if you are interested). Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don't bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint, you really do get what you pay for in employees, contractors, and software. SharePoint can become absolutely critical to your business; skimp on hiring a developer and he may create a web part that brings down the farm because he doesn't know what he's doing, or hire a bargain admin who thinks it's fine to stick everything in the same content database and then can't figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.

    Read the article

  • jQuery .ajax async postback on C# UserControl

    - by tnriverfish
I'm working on adding a todo list to a project system and would like to have the todo creation trigger an async postback to update the database. I'd really like to host this in a user control so I can drop the todo list onto a project page, a task page, or a stand-alone todo list page. Here's what I have.

User control "TodoList.ascx", which lives in the Controls directory. The script below sits at the top of the user control. You can see where I started building jsonText to post back, but when that didn't work I just tried posting back an empty data variable and removed the 'string[] items' parameter from the AddTodo2 method.

<script type="text/javascript">
$(document).ready(function() {
    // Add the page method call as an onclick handler for the div.
    $("#divAddButton").click(function() {
        var jsonText = JSON.stringify({ tdlId: 1, description: "test test test" });
        //data: jsonText,
        $.ajax({
            type: "POST",
            url: "TodoList.aspx/AddTodo2",
            data: "{}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(msg) {
                alert('retrieved');
                $("#divAddButton").text(msg.d);
            },
            error: function() {
                alert("error");
            }
        });
    });
});
</script>

The rest of the markup on the ascx:

<div class="divTodoList">
    <asp:PlaceHolder ID="phTodoListCreate" runat="server">
        <div class="divTLDetail">
            <div>Description</div>
            <div><asp:TextBox ID="txtDescription" runat="server"></asp:TextBox></div>
            <div>Active</div>
            <div><asp:CheckBox ID="cbActive" runat="server" /></div>
            <div>Access Level</div>
            <div><asp:DropDownList ID="ddlAccessLevel" runat="server"></asp:DropDownList></div>
        </div>
    </asp:PlaceHolder>
    <asp:PlaceHolder ID="phTodoListDisplayHeader" runat="server">
        <div id="divTLHeader">
            <asp:HyperLink ID="hlHeader" runat="server"></asp:HyperLink>
        </div>
    </asp:PlaceHolder>
    <asp:PlaceHolder ID="phTodoListItems" runat="server">
        <div class="divTLItems">
            <asp:Literal ID="litItems" runat="server"></asp:Literal>
        </div>
    </asp:PlaceHolder>
    <asp:PlaceHolder ID="phAddTodo" runat="server">
        <div class="divTLAddItem">
            <div id="divAddButton">Add Todo</div>
            <div id="divAddText"><asp:TextBox ID="txtNewTodo" runat="server"></asp:TextBox></div>
        </div>
    </asp:PlaceHolder>
    <asp:Label ID="lbTodoListId" runat="server" style="display:none;"></asp:Label>
</div>

To test the idea I created a /TodoList.aspx page that lives in the root directory.

<uc1:TodoList runat="server" ID="tdl1" TodoListId="1" ></uc1:TodoList>

The code-behind for TodoList.aspx:

protected void Page_Load(object sender, EventArgs e)
{
    SecurityManager sm = new SecurityManager();
    sm.MemberLevelAccessCheck(MemberLevelKey.AreaAdmin);
}

public static string AddTodo2()
{
    return "yea!";
}

My hope is that I can have a control that can be used to display multiple todo lists and create a brand new todo list as well. When I click on the #divAddButton I can watch it build the postback in Firebug, but once it completes it runs the error portion by alerting 'error', and I can't see why. I'd really rather have the response method live inside the user control as well, since I'll be dropping it on several pages and want to keep from having to put a method on each individual page. Any help would be appreciated.
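For reference, a minimal sketch (the persistence logic and parameter names are hypothetical) of what ASP.NET expects for a page method called this way: among other requirements, the method must be public static, live on the page class itself (methods on a user control are not directly reachable by URL), carry the [WebMethod] attribute, and take parameter names that match the JSON keys being posted:

using System.Web.Services;

public partial class TodoList : System.Web.UI.Page
{
    // Reachable at TodoList.aspx/AddTodo2 when posted as JSON.
    [WebMethod]
    public static string AddTodo2(int tdlId, string description)
    {
        // Persist the new todo item here (hypothetical).
        return "yea!";
    }
}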

    Read the article

  • Batch breaks off after calling another batch file

    - by Sven Arno Jopen
I have a problem: my batch process breaks off each time after calling another batch file. The batch files are used to run a make process from IBM Rhapsody; they convert the call from Rhapsody into calls to the Visual Studio tools, so nmake is invoked from the batch file after making various settings. The scripts weren't written entirely by me; I only adapted them to run under both Windows architectures, x86 and x64. The first script (vs2005_make.bat) is called from Rhapsody and runs up to the "call" statement. The second script (Vcvars_VisualStudio2005.bat) runs to its end, but the first script never resumes; at this point the process breaks off without an error message. I'm not very familiar with batch files; this is the first time I've written more than simple console commands in one. I hope I've given all the information needed; otherwise, please ask. Here is the start script (vs2005_make.bat):

:: parameter 1 - Makefile which should be used
:: parameter 2 - The make target mark
@echo off
IF "%2"=="" set target=all
IF "%2"=="all" set target=all
IF "%2"=="build" set target=all
IF "%2"=="rebuild" set target=clean all
IF "%2"=="clean" set target=clean
set RegQry="HKLM\Hardware\Description\System\CentralProcessor\0"
REG.exe Query %RegQry% > checkOS.txt
Find /i "x86" < CheckOS.txt > StringCheck.txt
IF %ERRORLEVEL%==0 (
    set arch=x86
) ELSE (
    set arch=x64
)
call "%ProgramFiles%\IBM\Rhapsody752\Share\etc\Vcvars_VisualStudio2005.bat" %arch%
IF %ERRORLEVEL%==0 (
    set makeflags=
    nmake /nologo /S /F %1 %target%
)
del checkOS.txt
del StringCheck.txt
exit

and here is the called script (Vcvars_VisualStudio2005.bat):

:: param 1 - Processor architecture
@echo off
ECHO param 1 = %1
IF %1==x86 (
    SET ProgrammPath=%ProgramFiles%
) ELSE IF %1==x64 (
    SET ProgrammPath=%ProgramFiles(x86)%
) ELSE (
    ECHO Unknowen architectur
    EXIT /B 1
)
SET VSINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8\Common7\IDE"
SET VCINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8"
SET FrameworkDir=C:\WINDOWS\Microsoft.NET\Framework
SET FrameworkVersion=v2.0.50727
SET FrameworkSDKDir="%ProgrammPath%\Microsoft Visual Studio 8\SDK\v2.0"
rem Root of Visual Studio common files.
IF %VSINSTALLDIR%=="" GOTO Usage
IF %VCINSTALLDIR%=="" SET VCINSTALLDIR=%VSINSTALLDIR%
rem
rem Root of Visual Studio ide installed files.
rem
SET DevEnvDir=%VSINSTALLDIR%
rem
rem Root of Visual C++ installed files.
rem
SET MSVCDir=%VCINSTALLDIR%\VC
SET PATH=%DevEnvDir%;%MSVCDir%\BIN;%VCINSTALLDIR%\Common7\Tools;%VCINSTALLDIR%\Common7\Tools\bin\prerelease;%VCINSTALLDIR%\Common7\Tools\bin;%FrameworkSDKDir%\bin;%FrameworkDir%\%FrameworkVersion%;%PATH%;
SET INCLUDE=%MSVCDir%\ATLMFC\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\PlatformSDK\include\gl;%MSVCDir%\PlatformSDK\include;%FrameworkSDKDir%\include;%INCLUDE%
SET LIB=%MSVCDir%\ATLMFC\LIB;%MSVCDir%\LIB;%MSVCDir%\PlatformSDK\lib;%FrameworkSDKDir%\lib;%LIB%
GOTO end
:Usage
ECHO. VSINSTALLDIR variable is not set.
ECHO.
ECHO SYNTAX: %0
GOTO end
:end

Here is the console output. What I don't understand, and find very suspicious, is that after the "IF %ERRORLEVEL%" statement in the first script everything is echoed even though echo is set to off…

Executing: "C:\Programme\IBM\Rhapsody752\Share\etc\vs2005_make.bat" Simulation.mak build
IF %ERRORLEVEL%==0 (
Mehr? set arch=x86
Mehr? ) ELSE (
Mehr? set arch=x64
Mehr? )
call "%ProgramFiles%\IBM\Rhapsody752\Share\etc\Vcvars_VisualStudio2005.bat" %arch%
:: param 1 - Processor architecture
@echo off
ECHO param 1 = %1
param 1 = x86
IF %1==x86 (
Mehr? SET ProgrammPath=%ProgramFiles%
Mehr? ) ELSE IF %1==x64 (
Mehr? SET ProgrammPath=%ProgramFiles(x86)%
Mehr? ) ELSE (
Mehr? ECHO Unknowen architectur
Mehr? EXIT /B 1
Mehr? )
SET VSINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8\Common7\IDE"
SET VCINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8"
SET FrameworkDir=C:\WINDOWS\Microsoft.NET\Framework
SET FrameworkVersion=v2.0.50727
SET FrameworkSDKDir="%ProgrammPath%\Microsoft Visual Studio 8\SDK\v2.0"
rem Root of Visual Studio common files.
IF %VSINSTALLDIR%=="" GOTO Usage
IF %VCINSTALLDIR%=="" SET VCINSTALLDIR=%VSINSTALLDIR%
rem
rem Root of Visual Studio ide installed files.
rem
SET DevEnvDir=%VSINSTALLDIR%
rem
rem Root of Visual C++ installed files.
rem
SET MSVCDir=%VCINSTALLDIR%\VC
SET PATH=%DevEnvDir%;%MSVCDir%\BIN;%VCINSTALLDIR%\Common7\Tools;%VCINSTALLDIR%\Common7\Tools\bin\prerelease;%VCINSTALLDIR%\Common7\Tools\bin;%FrameworkSDKDir%\bin;%FrameworkDir%\%FrameworkVersion%;%PATH%;
SET INCLUDE=%MSVCDir%\ATLMFC\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\PlatformSDK\include\gl;%MSVCDir%\PlatformSDK\include;%FrameworkSDKDir%\include;%INCLUDE%
SET LIB=%MSVCDir%\ATLMFC\LIB;%MSVCDir%\LIB;%MSVCDir%\PlatformSDK\lib;%FrameworkSDKDir%\lib;%LIB%
GOTO end
Build Done

I hope someone has an idea; I have been working on this for two days now and can't find the error... Thanking you in anticipation. Note: The word "Mehr?" in the output is German and means "More?". I don't know where it comes from; it is possibly the German translation of the English console prompt.
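For reference (an observation about cmd.exe in general, not a claim about this exact failure): a well-known pitfall with blocks like the one above is that the closing parenthesis inside %ProgramFiles(x86)% can terminate a parenthesized IF/ELSE block early. The usual workaround is to quote the whole SET assignment; a minimal sketch:

@echo off
IF "%1"=="x86" (
    REM Quoting the entire assignment keeps the ")" in the
    REM expanded value from closing the IF block prematurely.
    SET "ProgrammPath=%ProgramFiles%"
) ELSE (
    SET "ProgrammPath=%ProgramFiles(x86)%"
)
ECHO %ProgrammPath%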

    Read the article

  • Metro: Introduction to CSS 3 Grid Layout

    - by Stephen.Walther
The purpose of this blog post is to provide you with a quick introduction to the new W3C CSS 3 Grid Layout standard. You can use CSS Grid Layout in Metro style applications written with JavaScript to lay out the content of an HTML page. CSS Grid Layout provides you with all of the benefits of using HTML tables for layout without requiring you to actually use any HTML table elements.

Doing Page Layouts without Tables

Back in the 1990's, if you wanted to create a fancy website, then you would use HTML tables for layout. For example, if you wanted to create a standard three-column page layout then you would create an HTML table with three columns like this:

<table height="100%"> <tr> <td valign="top" width="300px" bgcolor="red"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </td> <td valign="top" bgcolor="green"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </td> <td valign="top" width="300px" bgcolor="blue"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </td> </tr> </table>

When the table above gets rendered out to a browser, you end up with a standard three-column layout: the width of the left and right columns is fixed, and the width of the middle column expands or contracts depending on the width of the browser.

Sometime around the year 2005, everyone decided that using tables for layout was a bad idea. Instead of using tables for layout — it was collectively decided by the spirit of the Web — you should use Cascading Style Sheets instead. Why is using HTML tables for layout bad? Using tables for layout breaks the semantics of the TABLE element. A TABLE element should be used only for displaying tabular information such as train schedules or moon phases. Using tables for layout is bad for accessibility (the Web Content Accessibility Guidelines 1.0 are explicit about this), and using tables for layout is bad for separating content from layout (see http://CSSZenGarden.com). Post 2005, anyone who used HTML tables for layout was encouraged to hold their head down in shame.

That's all well and good, but the problem with using CSS for layout is that CSS can be more difficult to work with than HTML tables. For example, to achieve a standard three-column layout, you either need to use absolute positioning or floats. Here's a three-column layout with floats:

<style type="text/css"> #container { min-width: 800px; } #leftColumn { float: left; width: 300px; height: 100%; background-color:red; } #middleColumn { background-color:green; height: 100%; } #rightColumn { float: right; width: 300px; height: 100%; background-color:blue; } </style> <div id="container"> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> </div>

The page above contains four DIV elements: a container DIV which contains a leftColumn, middleColumn, and rightColumn DIV. The leftColumn DIV element is floated to the left and the rightColumn DIV element is floated to the right.
Notice that the rightColumn DIV appears in the page before the middleColumn DIV – this unintuitive ordering is necessary to get the floats to work correctly (see http://stackoverflow.com/questions/533607/css-three-column-layout-problem). The page above (almost) works with the most recent versions of most browsers. For example, you get the correct three-column layout in both Firefox and Chrome. The layout mostly works with Internet Explorer 9, except that for some strange reason the min-width doesn't work, so when you shrink the width of your browser you can get an unwanted layout in which the middle column (the green column) bleeds to the left and right. People have solved these issues with more complicated CSS; for example, see http://matthewjamestaylor.com/blog/holy-grail-no-quirks-mode.htm But, at this point, no one could argue that using CSS is easier or more intuitive than tables. It takes work to get a layout with CSS, and we know that we could achieve the same layout more easily using HTML tables.

Using CSS Grid Layout

CSS Grid Layout is a new W3C standard which provides you with all of the benefits of using HTML tables for layout without the disadvantage of using an HTML TABLE element. In other words, CSS Grid Layout enables you to perform table layouts using pure Cascading Style Sheets. The CSS Grid Layout standard is still in a "Working Draft" state (it is not finalized) and it is located here: http://www.w3.org/TR/css3-grid-layout/ The CSS Grid Layout standard is only supported by Internet Explorer 10, and there are no signs that any browser other than Internet Explorer will support this standard in the near future. This means that it is only practical to take advantage of CSS Grid Layout when building Metro style applications with JavaScript. Here's how you can create a standard three-column layout using CSS Grid Layout:

<!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } </style> </head> <body> <div id="container"> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> </div> </body> </html>

When the page above is rendered in Internet Explorer 10, you get a standard three-column layout. The page above contains four DIV elements: a container DIV which contains a leftColumn DIV, middleColumn DIV, and rightColumn DIV. The container DIV is set to Grid display mode with the following CSS rule:

#container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; }

The display property is set to the value "-ms-grid". This property causes the container DIV to lay out its child elements in a grid. (Notice that you use "-ms-grid" instead of "grid". The "-ms-" prefix is used because the CSS Grid Layout standard is still preliminary.
This implementation only works with IE10 and it might change before the final release.) The grid columns and rows are defined with the "-ms-grid-columns" and "-ms-grid-rows" properties. The style rule above creates a grid with three columns and one row. The left and right columns are fixed at 300 pixels; the middle column sizes automatically depending on the remaining space available. The leftColumn, middleColumn, and rightColumn DIVs are positioned within the container grid element with the following CSS rules:

#leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; }

The "-ms-grid-column" property is used to specify the column associated with the element selected by the style sheet selector. The leftColumn DIV is positioned in the first grid column, the middleColumn DIV is positioned in the second grid column, and the rightColumn DIV is positioned in the third grid column. I find using CSS Grid Layout to be just as intuitive as using an HTML table for layout. You define your columns and rows and then you position different elements within these columns and rows. Very straightforward.

Creating Multiple Columns and Rows

In the previous section, we created a super simple three-column layout which contained only a single row. In this section, let's create a slightly more complicated layout which contains more than one row. The following page contains a header row, a content row, and a footer row. The content row contains three columns:

<!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } #leftColumn { -ms-grid-column: 1; -ms-grid-row: 2; background-color:red; } #middleColumn { -ms-grid-column: 2; -ms-grid-row: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; -ms-grid-row: 2; background-color:blue; } #footer { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 3; background-color: orange; } </style> </head> <body> <div id="container"> <div id="header"> Header, Header, Header </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="footer"> Footer, Footer, Footer </div> </div> </body> </html>

In the page above, the grid layout is created with the following rule, which creates a grid with three rows and three columns:

#container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; }

The header is created with the following rule:

#header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; }

The header is positioned in column 1 and row 1. Furthermore, notice that the "-ms-grid-column-span" property is used to span the header across three columns.

CSS Grid Layout and Fractional Units

When you use CSS Grid Layout, you can take advantage of fractional units.
Fractional units provide you with an easy way of dividing up the remaining space in a page. Imagine, for example, that you want to create a four-column page layout. You want the size of the first column to be fixed at 200 pixels, and you want to divide the remaining space among the remaining three columns, with the width of the second column equal to the combined width of the third and fourth columns. The following CSS rule creates four columns with the desired widths:

#container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; }

The fr unit represents a fraction. The grid above contains four columns. The second column is two times the size (2fr) of the third (1fr) and fourth (1fr) columns. When you use the fractional unit, the remaining space is divided up using fractional amounts. For example, in a container 800 pixels wide, the first column takes 200 pixels and the remaining 600 pixels are split into 2 + 1 + 1 = 4 shares, giving the last three columns 300, 150, and 150 pixels respectively. Notice that the single row is set to a height of 1fr: the single grid row gobbles up the entire vertical space. Here's the entire HTML page:

<!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } #firstColumn { -ms-grid-column: 1; background-color:red; } #secondColumn { -ms-grid-column: 2; background-color:green; } #thirdColumn { -ms-grid-column: 3; background-color:blue; } #fourthColumn { -ms-grid-column: 4; background-color:orange; } </style> </head> <body> <div id="container"> <div id="firstColumn"> First Column, First Column, First Column </div> <div id="secondColumn"> Second Column, Second Column, Second Column </div> <div id="thirdColumn"> Third Column, Third Column, Third Column </div> <div id="fourthColumn"> Fourth Column, Fourth Column, Fourth Column </div> </div> </body> </html>

Summary

There is more in the CSS 3 Grid Layout standard than discussed in this blog post; my goal was to describe the basics. If you want to learn more, you can read through the entire standard at http://www.w3.org/TR/css3-grid-layout/ In this blog post, I described some of the difficulties that you might encounter when attempting to replace HTML tables with Cascading Style Sheets when laying out a web page. I explained how you can take advantage of the CSS 3 Grid Layout standard to avoid these problems when building Metro style applications using JavaScript. CSS 3 Grid Layout provides you with all of the benefits of using HTML tables for laying out a page without requiring you to use HTML table elements.

    Read the article

  • How to debug manifest errors?

    - by Rryk
I am creating an application that depends on a third-party library, which in turn depends on MSVCP90D.dll (it was compiled with debug symbols). The application fails to start and shows an error message. I have found such a library in C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT and C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugCRT. As you can see, one of them is 64-bit, while the other is 32-bit. When I placed the 64-bit one into the directory of the application, the application silently crashed while loading (the log from the Visual Studio Output window is below). With the 32-bit one I get another error message. If I press Abort, the program shuts down; Retry results in breaking into a debug session in the crt0msg.c file, which is a system file I have no idea how to debug; and if I press Ignore, I get yet another error message.

So the question is: how do I debug such errors? Please give me some links where I can read more about this, or point out what exactly I should do in such cases. I know this relates to manifest problems; please give me a good resource where I can read up on manifests, since what I have found has confused me even more. This is the log for the 64-bit version of the MSVCP90D.dll library:

'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.exe', Symbols loaded.
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ntdll.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\kernel32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\KernelBase.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\user32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\gdi32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\lpk.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\usp10.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\msvcrt.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\advapi32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\sechost.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\rpcrt4.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\sspicli.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\cryptbase.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\shell32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\shlwapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\winmm.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\version.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\psapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll', Symbols loaded.
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\opengl32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\glu32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ddraw.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\dciman32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\setupapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\dwmapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\secur32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll', Symbols loaded.
'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll'
'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\secur32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\opengl32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ddraw.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dwmapi.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dciman32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\glu32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleacc.dll'
'chrome.exe': Unloaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleaut32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ole32.dll'
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped).
The program '[1152] chrome.exe: Native' has exited with code 9 (0x9).
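For reference (not part of the original question): side-by-side/manifest activation failures like this one can usually be traced with the sxstrace tool that ships with Windows Vista and later. A typical session, run from an elevated command prompt, looks roughly like this:

sxstrace Trace -logfile:SxsTrace.etl
:: ...start the failing application, then stop the trace with Ctrl+C...
sxstrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt

The parsed text file names the assembly identity (e.g. Microsoft.VC90.DebugCRT) and the processor architecture the loader was actually looking for, which is usually enough to tell a 32/64-bit mismatch from a missing manifest.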

    Read the article

  • Building a jQuery Plug-in to make an HTML Table scrollable

    - by Rick Strahl
Today I got a call from a customer, and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven, with lots of layout formatting automatically driven through meta data rather than through explicit hand-coded HTML layouts. One of the problems in this app is tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display only a few rows of data in the UI. This sort of layout does not lend itself well to paging, but works much better with scrollable data.

Unfortunately, scrollable tables are not easily created. HTML tables are mangy beasts, as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling, whether of the table rows themselves or even of individual child columns. There's no built-in way to make tables scroll and to lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this:

<div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;"> <table id="table" style="width: 100%" class="blackborder" > <thead> <tr class="gridheader"> <th>Column 1</th> <th>Column 2</th> <th>Column 3</th> <th >Column 4</th> </tr> </thead> <tbody> <tr> <td>Column 1 Content</td> <td>Column 2 Content</td> <td>Column 3 Content</td> <td>Column 4 Content</td> </tr> <tr> <td>Column 1 Content</td> <td>Column 2 Content</td> <td>Column 3 Content</td> <td>Column 4 Content</td> </tr> … </tbody> </table> </div>

that won't give a very satisfying visual experience: both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The scrollbar shows all the way down the length of the table, yet another visual miscue. In a pinch this will work, but it's ugly.

What's out there?

Before we go further here you should know that there are a few capable grid plug-ins out there already. Among them:

Flexigrid (can work off any table as well as with AJAX data)
jQuery Scrollable Table Plug-in (features similar to what I need, but not quite)
jqGrid (mostly an AJAX grid which is very powerful and works very well)

But in the end none of them fit the bill of what I needed in this situation. All of these require custom CSS, and some of them are fairly complex to restyle. Others are AJAX only or work better with AJAX-loaded data. However, I needed to actually try (as much as possible) to maintain the original styling of the tables without requiring extensive re-styling.

Building the makeTableScrollable() Plug-in

To make a table scrollable requires rearranging the table a bit. In the plug-in I built, I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: you create the divs, copy the original table into the bottom one, then clone the table, clear all content, append the <thead> section into the new table, and then copy that table into the second, header <div>. Easy as pie, right?
Unfortunately, it's a bit more complicated than that, as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and to make sure the borders properly line up for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar, and so forth. The end result of my plug-in, using the same table I used earlier, is a table with a fixed header and a scrollable body. To create it, I use the following jQuery logic to select my table and run the makeTableScrollable() plug-in against the selector:

$("#table").makeTableScrollable( { cssClass:"blackborder"} );

Without much further ado, here's the short code for the plug-in:

(function ($) {
    $.fn.makeTableScrollable = function (options) {
        return this.each(function () {
            var $table = $(this);
            var opt = {
                // height of the table
                height: "250px",
                // right padding added to support the scrollbar
                rightPadding: "10px",
                // css class used for the wrapper div
                cssClass: ""
            }
            $.extend(opt, options);
            var $thead = $table.find("thead");
            var $ths = $thead.find("th");
            var id = $table.attr("id");
            var cssClass = $table.attr("class");
            if (!id)
                id = "_table_" + new Date().getMilliseconds().toString();
            $table.width("+=" + opt.rightPadding);
            $table.css("border-width", 0);
            // add a column to all rows of the table
            var first = true;
            $table.find("tr").each(function () {
                var row = $(this);
                if (first) {
                    row.append($("<th>").width(opt.rightPadding));
                    first = false;
                }
                else
                    row.append($("<td>").width(opt.rightPadding));
            });
            // force full sizing on each of the th elements
            $ths.each(function () {
                var $th = $(this);
                $th.css("width", $th.width());
            });
            // Create the table wrapper div
            var $tblDiv = $("<div>").css({ position: "relative", overflow: "hidden", overflowY: "scroll" })
                                    .addClass(opt.cssClass);
            var width = $table.width();
            $tblDiv.width(width).height(opt.height)
                   .attr("id", id + "_wrapper")
                   .css("border-top", "none");
            // Insert the wrapper before the table
            $tblDiv.insertBefore($table);
            // then move the table into it
            $table.appendTo($tblDiv);
            // Clone the div for the header
            var $hdDiv = $tblDiv.clone();
            $hdDiv.empty();
            var width = $table.width();
            $hdDiv.attr("style", "")
                  .css("border-bottom", "none")
                  .width(width)
                  .attr("id", id + "_wrapper_header");
            // create a copy of the table and remove all children
            var $newTable = $($table).clone();
            $newTable.empty()
                     .attr("id", $table.attr("id") + "_header");
            $thead.appendTo($newTable);
            $hdDiv.insertBefore($tblDiv);
            $newTable.appendTo($hdDiv);
            $table.css("border-width", 0);
        });
    }
})(jQuery);

Oh sweet spaghetti code :-) The code starts out by dealing with the parameters that can be passed in the options object map:

height – The height of the outside wrapper container (the full table structure). Defaults to 250px.
rightPadding – The padding that is added to the right of the table to account for the scrollbar. The plug-in creates a column of this width and injects it into the table. If too small, the rightmost column might get truncated; if too large, the empty column might show.
cssClass – The CSS class of the wrapping container that appears to wrap the table. If you want a border around your table, this class should probably provide it, since the plug-in removes the table border.

The rest of the code is obtuse, but pretty straightforward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column.
The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted, and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally, the actual table is cloned and cleared of all elements, and the original table's THEAD section is moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>. I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given what's happening, the amount of code is rather small.

Disclaimer: Your mileage may vary

A word of warning: I make no guarantees about the code above. It's a first cut, and I provided it here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy. I tested this component against the typical scenarios we plan on using it for, which are tables that use a few well-known styles (or no styling at all). I suspect if you have complex styling on your <table> tag that things might not go so well. If you plan on using this plug-in you might want to minimize your styling of the table tag and defer any border formatting using the class passed in via the cssClass parameter, which ends up on the two wrapper divs that wrap the header and body rows.

There's also no explicit support for footers. I rarely if ever use footers (when not using paging, that is), so I didn't feel the need to add footer support. However, if you need that it's not difficult to add - the logic is the same as adding the header. The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML; you can look at my "Adding proper THEAD sections to a GridView" post for more info on how to get a GridView to render properly. The plug-in has no dependencies other than jQuery. Even with the limitations in mind, I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately.

Resources
Download Sample and Plug-in code
Latest version in the West Wind Web & AJAX Toolkit Repository

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in jQuery  HTML  ASP.NET

    Read the article

  • Django doesn't refresh my request object when reloading the current page.

    - by Boris Rusev
I have a Django web site which I want to be viewable in different languages. Until this morning everything was working fine. Here is the deal. I go to, say, my About Us page and it is in English. Below it there is the change language button, and when I press it everything "magically" translates to Bulgarian, just the way I want it. On the other hand, I have a JS menu from which the user is able to browse through the products. I click on 'T-Shirt', then a sub-menu opens below the previously pressed item containing different categories: Men, Women, Children. The link guides me to a page where the exact clothes I have requested are listed. BUT... when I try to change the language THEN, nothing happens. I go to the About Us page, change the language from there, return to the clothes catalog, and the language is changed... I will now paste some code.

This is my change button code:

function changeLanguage() {
    if (getCookie('language') == 'EN') {
        setCookie("language", 'BG');
    } else {
        setCookie("language", 'EN');
    }
    window.location.reload();
}

These are my URL patterns:

urlpatterns = patterns('',
    # Example:
    # (r'^enter_clothing/', include('enter_clothing.foo.urls')),
    # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
    # to INSTALLED_APPS to enable admin documentation:
    # (r'^admin/doc/', include('django.contrib.admindocs.urls')),
    # Uncomment the next line to enable the admin:
    (r'^site_media/(?P<path>.*)$', 'django.views.static.serve',
        {'document_root': '/home/boris/Projects/enter_clothing/templates/media', 'show_indexes': True}),
    (r'^$', 'enter_clothing.clothes_app.views.index'),
    (r'^home', 'enter_clothing.clothes_app.views.home'),
    (r'^products', 'enter_clothing.clothes_app.views.products'),
    (r'^orders', 'enter_clothing.clothes_app.views.orders'),
    (r'^aboutUs', 'enter_clothing.clothes_app.views.aboutUs'),
    (r'^contactUs', 'enter_clothing.clothes_app.views.contactUs'),
    (r'^admin/', include(admin.site.urls)),
    (r'^(\w+)/(\w+)/page=(\d+)', 'enter_clothing.clothes_app.views.displayClothes'),
)

My About Us page:

@base
def aboutUs(request):
    return """<b>%s</b>""" % getTranslation("About Us Text", request.COOKIES['language'])

The @base method:

def base(myfunc):
    def inner_func(*args, **kwargs):
        try:
            args[0].COOKIES['language']
        except:
            args[0].COOKIES['language'] = 'BG'
        resetGlobalVariables()
        initCollections(args[0])
        categoriesByCollection = dict((collection, getCategoriesFromCollection(args[0], collection)) for collection in collections)
        if args[0].COOKIES['language'] == 'BG':
            for k, v in categoriesByCollection.iteritems():
                categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (translateCategory(args[0], x), translateCollection(args[0], k), str(x)), v), "")
        else:
            for k, v in categoriesByCollection.iteritems():
                categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (str(x), str(k), str(x)), v), "")
        contents = myfunc(*args, **kwargs)
        return render_to_response('index.html', {
            'title': title,
            'categoriesByCollection': categoriesByCollection.iteritems(),
            'keys': enumerate(keys),
            'values': enumerate(values),
            'contents': contents,
            'btnHome': getTranslation("Home Button", args[0].COOKIES['language']),
            'btnProducts': getTranslation("Products Button", args[0].COOKIES['language']),
            'btnOrders': getTranslation("Orders Button", args[0].COOKIES['language']),
            'btnAboutUs': getTranslation("About Us Button", args[0].COOKIES['language']),
            'btnContacts': getTranslation("Contact Us Button",
                args[0].COOKIES['language']),
            'btnChangeLanguage': getTranslation("Button Change Language", args[0].COOKIES['language'])})
    return inner_func

And the catalog page:

@base
def displayClothes(request, category, collection, page):
    clothesToDisplay = getClothesFromCollectionAndCategory(request, category, collection)
    contents = ""
    pageCount = len(clothesToDisplay) / (rowCount * columnCount) + 1
    matrixSize = rowCount * columnCount
    currentPage = str(page).replace("page=", "")
    currentPage = int(currentPage) - 1
    #raise Exception(request)
    # this is for the clothes layout
    for x in range(currentPage * matrixSize, matrixSize * (currentPage + 1)):
        if x < len(clothesToDisplay):
            if request.COOKIES['language'] == 'EN':
                contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getEnglishHTML()
            else:
                contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getBulgarianHTML()
            if (x + 1) % columnCount == 0:
                contents += """<div class="clear"></div>"""
    contents += """<div class="clear"></div>"""
    # this is for the page links
    if pageCount > 1:
        for x in range(0, pageCount):
            if x == currentPage:
                contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: black;">%s</span></a>""" % (category, collection, x + 1, x + 1)
            else:
                contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: blue;">%s</span></a>""" % (category, collection, x + 1, x + 1)
    return """%s""" % (contents)

Let me explain: you needn't be alarmed by the large quantity of code I have posted. You don't have to understand it or even look at all of it; I've published it just in case, because I really can't find the origin of the bug. Now, this is how I have narrowed down the problem. I am debugging with "raise Exception(request)" every time I want to know what's inside my request object. When I place this in my aboutUs method, the language cookie value changes every time I press the language button, but NOT when I am in the displayClothes method; there the language stays the same. I also tried putting the exception line at the beginning of the @base method. It turns out the situation there is exactly the same: when I am on my About Us page and click the button, the language in my request object changes, but when I press the button while on the catalog page it remains unchanged. That is all I could find, and I have no idea how Django distinguishes my pages, or in what way. P.S. The JavaScript, I think, works perfectly; I have tested it in multiple ways. Thank you, I hope some of you will read this enormous post, and don't hesitate to ask for more code excerpts.
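For reference (an assumption about the setCookie helper, whose source is not shown above): cookies written from JavaScript without an explicit path are scoped to the current page's path, so a cookie set while on /category/collection/page=1 is not the same cookie the server sees on other URLs, which would produce exactly the per-page behavior described. A minimal sketch of a path-pinned setter:

function setCookie(name, value) {
    // "path=/" makes the cookie visible to every URL on the site;
    // without it, the browser scopes the cookie to the current path.
    document.cookie = name + "=" + value + "; path=/";
}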

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

CLSF - 17th October

XFS update (led by Jeff Liu)

XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in some sample statistics for XFS, Ext4+JBD2, and Btrfs via:

# git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

What made up those changes in XFS?

Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock (a short usage sketch follows at the end of this section).
Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, to reduce the CPU overhead.
User namespace support. Both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e., with CRC enabled).
CONFIG_XFS_WARN, a new lightweight runtime debug mode which can be deployed in production environments.
Readahead during log object recovery; this change can speed up log replay significantly.
Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

Bitter arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice in Linux nowadays, because:

Stable. XFS has a good record of stability over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or so. I owe it to the XFS upstream code reviewers; they always perform serious code review as well as testing.
Good performance for large and small files. "XFS does not work very well for small files" has already been an old story for years.
Best choice (maybe) for distributed PB-scale filesystems. E.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
Scalability. Any objection that XFS is best on this point? :)
XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
Misc. Ext4 is widely used and has been proven fast and stable under various loads and scenarios; XFS just needs more users, and Btrfs is still on the road to maturity.
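As the promised sketch for the self-describing metadata item above (the device name is hypothetical, and the exact xfs_info output depends on your xfsprogs version), creating and checking a CRC-enabled v5 filesystem looks roughly like this:

# mkfs.xfs -m crc=1 /dev/sdb1
# mount /dev/sdb1 /mnt
# xfs_info /mnt | grep crc      # the meta-data line should report crc=1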
Ceph Introduction (led by Li Wang)

This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work focuses on inline data support, to separate metadata and data storage and reduce file access time: a file access needs two communications (fetching the metadata from the MDS and then getting the data from the OSD), and small-file access is limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

Improve Linux swap for Flash storage (led by Shaohua Li)

Because of its high density, low power, and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

SWAPOUT

swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSDs. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.

IO pattern: In most cases, swap IO comes in an interleaved pattern because there are multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer can merge the IO more easily.

TLB flush: If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

SWAPIN: A page fault does iodepth=1 sync IO, but it's a bit of a waste to issue only a page-sized IO. The obvious solution is swap readahead.
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it rarely performs well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).

SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and writes to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.

Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But some lock contention remains on very fast SSDs because of the swapcache address_space lock.

Zproject (led by Bob Liu)

Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap I/O storms and improve performance, but each has its own pros and cons.

ZSWAP: implemented on top of the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.

ZRAM: acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.

ZCACHE: handles file page compression, but it was removed from staging recently.

From industry (led by Tang Jie, LSI)

An LSI engineer introduced several new products to us. The first was a RAID5/6 card that uses full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing worth mentioning is the NV-DRAM in NMR/Raptor, which is directly exposed to the host system: applications can access the NV-DRAM directly through a memory address, using the standard system call mmap() (see the sketch below). He said this is already very useful for database logging.
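To make that access model concrete, here is a minimal C sketch of mapping such a device with mmap(2). The device path, mapping size, and record layout are hypothetical, since LSI's actual driver interface wasn't shown:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LOG_REGION_SIZE (4 * 1024 * 1024)   /* hypothetical mapping size */

int main(void)
{
    /* Hypothetical device node exported by the NV-DRAM driver. */
    int fd = open("/dev/nvdram0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the byte-addressable NV-DRAM directly into our address space. */
    char *log = mmap(NULL, LOG_REGION_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (log == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* A database could append a commit record with plain stores:
     * no write(2), no page cache, no block I/O path. */
    const char record[] = "txn=42 commit";
    memcpy(log, record, sizeof(record));

    munmap(log, LOG_REGION_SIZE);
    close(fd);
    return 0;
}
```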
This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially the file system; e.g., journaling may need to be redesigned to fully utilize such nonvolatile memory.

OCFS2 (led by Canquan Shen)

Without a doubt, HuaWei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is support for ATS (atomic test and set), which can work together with DLM. It looks like this idea was inspired by VMware's VMFS locking, i.e., http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

CLK - 18th October 2013

Improving Linux Development with Better Tools (Andi Kleen)

This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

- Static code checkers, e.g., sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and a concentrated effort may be needed to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; ... foo->do_foo(foo);
- Dynamic runtime checkers, e.g., thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
- Fuzzers/test suites. E.g., Trinity is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
- Debuggers/tracers to understand code, e.g., ftrace; it can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
- Tools to read/understand source, e.g., grep/cscope work great for many cases, but they do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } ops = { .do_foo = my_foo }; ... foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing performance between XFS, Btrfs, and Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only three people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The audience questions were mainly focused on stability and comparisons with other file systems.

Linux Containers (Feng Gao)

The speaker introduced the purpose of each kind of namespace, including mount/UTS/IPC/Network/PID/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than the LXC userspace tools.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver. It can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible namespaces for the future: the first is audit, though it is not yet clear whether it should be tied to the user namespace; the other is syslog, where the question is whether we really need it.

In-memory Compression (Bob Liu)

The same as at CLSF: a nice introduction that I have already covered above.

Misc

There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

-- Jeff Liu

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules. Picking the most important part of Scrum is difficult. All of the rules are required, but if there were one rule that is "more" required than every other rule, it's having a good Product Owner. Simply put, the Product Owner can make or break the project.

Duties of the Product Owner

A Product Owner has many duties and responsibilities. I'll talk about each of these duties in detail below. A Product Owner:

- Discovers and records stories for the backlog.
- Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog.
- Determines Release dates and Iteration dates.
- Develops story details and helps the team understand those details.
- Helps QA to develop acceptance tests.
- Interacts with the Customer to make sure that the product is meeting the customer's needs.

Discovers and Records Stories for the Backlog

When I do Scrum, I always use User Stories as the means for capturing functionality that's required in the system. Some people will use Use Cases, but the same rule applies. The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system. Many different mechanisms for capturing this input can be used. User interviews are great, but all sources should be considered, including talking with Customer Support types. Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them. They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user. Stories are about adding value to the company. If the stories don't have a direct benefit to the end user, the Product Owner should question whether or not the story should be implemented. In general, technical stories should be included as tasks in User Stories. Technical stories are often needed, but the ultimate value to the user is in user-based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one-line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories. I'll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn.

Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog

Prioritization of stories is one of the most difficult tasks that a Product Owner must do. A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories. If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization. The Product Owner is the ONLY person who has the responsibility to prioritize that list. The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities. Care must also be taken to ensure that Return on Investment is also considered. Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.
Product Owners should be willing to look at cold, hard numbers to determine the order for stories. Even when many people want a feature, if that feature is costly to develop, it may not have as high of a return on investment as features that are cheaper, but not as popular. The act of prioritization often causes conflict in an environment. Customer Service thinks that feature X is the most important, because it will stop people from calling. Operations thinks that feature Y is the most important, because it will stop servers from crashing. Developers think that feature Z is most important because it will make writing software much easier for them. All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories. The Product Owner will determine which feature gives the best return on investment, and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first. A weak Product Owner will refuse to do prioritization. I've heard from multiple Product Owners the following phrase, "Well, it's all got to be done, so what does it matter what order we do it in?" If your product owner is using this phrase, you need a new Product Owner. Order is VERY important. In Scrum, every release is potentially shippable. If the wrong priority items are developed, then the value added in each release isn't what it should be. Additionally, the Product Owner with this mindset doesn't understand Agile. A product is NEVER finished, until the company has decided that it is no longer a going concern and they are no longer going to sell the product. Therefore, prioritization isn't an event, it's something that continues every day. The logical extension of the phrase "It's all got to be done" is that you will never ship your product, since a product is never "done." Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple. The top priority items are copied into the respective backlogs in order and the task is complete. The team does have the right to shuffle things around a little in the iteration backlog. For example, they may determine that working on story C with story A is appropriate because they're related, even though story B is technically a higher priority than story C. Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they'll work on Story C since it's smaller. They can't, however, go deep into the backlog to pick stories to implement. The team and the Product Owner should work together to determine what's best for the company. Prioritization is time consuming, but it's one of the most important things a Product Owner does.

Determines Release Dates and Iteration Dates

Product owners are responsible for determining release dates for a product. A common misconception that Product Owners have is that every "release" needs to correspond with an actual release to customers. This is not the case. In general, releases should be no more than 3 months long. You may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency. The date is far enough away that they don't need to give the release their full attention.
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do. The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers. The determination should be made on whether or not the features contained in the release are valuable enough and complete enough that the customers will see real value in the release. Often, some features will take more than three months to get to a state where they qualify for a release, or they need additional supporting features to be released. The product owner has the right to make this determination. In addition to release dates, the Product Owner will also help determine iteration dates. In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time. If the iteration length is changed every iteration, you're not doing Scrum. Iteration lengths help the team and company get into a rhythm of developing quality software. Iterations should be somewhere between 2 and 4 weeks in length. Any shorter, and significant software will likely not be developed. Any longer, and the team won't feel urgency and planning will become very difficult. Iterations may not be extended during the iteration. Companies where Scrum isn't really followed will often use this as a strategy to complete all stories. They don't want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team. Companies like this typically don't allow failure. This is unhealthy. Failure is part of life and unless we learn from it, we can't improve. I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult. For example, evaluating the performance of the team's estimation efforts becomes much more difficult if the iteration length varies. Also, the team must have a velocity measurement. If the iteration length varies, measuring velocity becomes impossible and upper management will no longer have the ability to evaluate the team's performance. People external to the team will no longer have the ability to determine when key features are likely to be developed. Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization.

Develops Story Details and Helps the Team Understand Those Details

A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation. Stories should be nothing more than short, one-line statements about the functionality. The team will then converse with the Product Owner about the details of that story. The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old-fashioned requirements documentation.
The team should only develop the documentation that is required, and should not develop documentation that is created only because there is a process to do so. In general, what works best is that in the iteration before a team starts development work on a story, the Product Owner, along with other appropriate business analysts, develops the details of that story. They'll figure out what business rules are required, potentially make paper prototypes or other lightweight mock-ups, and seek to understand the story and what it implies. Note that the time allowed for this task is deliberately short. The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I've found that teams will end up with Big Design Up Front and traditional requirements documents. This is a waste of time, since the team will need to then have discussions with the Product Owner to figure out what the requirements document says. Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development. If something comes up during development, you can address it at that time and figure out what you want to do. The goal is to keep things as lightweight as possible so that everyone can move as quickly as possible.

Helps QA to Develop Acceptance Tests

In Scrum, no story can be counted until it is accepted by QA. Because of this, acceptance tests are very important to the team. In general, acceptance tests need to be developed prior to the iteration, or at the very beginning of the iteration, so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable. Note that the Product Owner needs to be careful about specifying that the feature will work "Perfectly" at the end of the iteration. In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like "Do it right the first time." Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they're also ignoring the iterative nature of Scrum. Additionally, a weak product owner will seek to add scope in the acceptance testing. For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance. This often causes the team to miss the iteration because scope that wasn't planned on is included. There are ways that the team can mitigate this problem. For example, include extra "Product Owner" time to deal with the uncertainty that you know will be introduced by the Product Owner. This will slow the perceived velocity of the team and is not ideal, since they'll be doing more work than they get credit for.

Interacts with the Customer to Make Sure that the Product is Meeting the Customer's Needs

Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer. One of the great things about Agile is that if something doesn't work, we can revisit it in a future iteration!
This frees up the team to make the best decision now, knowing that if the decision proves to be incorrect, they can revisit and change it later. Features are about adding value to the customer, so if the customer doesn't find them useful, having the team make tweaks is valuable. In general, most software will be 80 to 90 percent "right" after the initial round, and only minor tweaks are required. If proper coding standards are followed, these tweaks are usually minor and easy to accomplish. Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor product owners will think that they know the answers already, that their customers are silly and do stupid things, and that they don't need customer input. If you have a product owner that is afraid to show the team's work to real customers, you probably need a different product owner.

Up Next, "Who Makes a Good Product Owner." Followed by, "Messing with the Team."

Technorati Tags: Scrum,Product Owner

    Read the article

  • Strange exception when connecting to a WCF service via a proxy server

    - by Slauma
    The exception "This operation is not supported for a relative URI." occurs in the following situation: I have a WCF service: [ServiceContract(ProtectionLevel=ProtectionLevel.None)] public interface IMyService { [OperationContract] [FaultContract(typeof(MyFault))] List<MyDto> MyOperation(int param); // other operations } public class MyService : IMyService { public List<MyDto> MyOperation(int param) { // Do the business stuff and return a list of MyDto } // other implementations } MyFault and MyDto are two very simple classes marked with [DataContract] attribute and each only having three [DataMember] of type string, int and int?. This service is hosted in IIS 7.0 on a Win 2008 Server along with an ASP.NET application. I am using an SVC file MyService.svc which is located directly in the root of the web site. The service configuration in web.config is the following: <system.serviceModel> <services> <service name="MyServiceLib.MyService"> <endpoint address="" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingConfig" contract="MyServiceLib.IMyService" /> </service> </services> <bindings> <wsHttpBinding> <binding name="wsHttpBindingConfig"> <security mode="None"> <transport clientCredentialType="None" /> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="false"/> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> This seems to work so far as I can enter the address http://www.domain.com/MyService.svc in a browser and get the "This is a Windows Communication Foundation Service"-Welcome page. One of the clients consuming the service is a console application: MyServiceClient aChannel = new MyServiceClient("WSHttpBinding_IMyService"); List<MyDto> aMyDtoList = aChannel.MyOperation(1); It has the following configuration: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="true" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="false" proxyAddress="10.20.30.40:8080" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://www.domain.com/MyService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IMyService" contract="MyService.IMyService" name="WSHttpBinding_IMyService" /> </client> </system.serviceModel> When I run this application at a production server at a customer site calling aChannel.MyOperation(1) throws the following exception: This operation is not supported for a relative URI. When I run this client application on my development PC with exactly the same config, with the exception that I remove proxyAddress="10.20.30.40:8080" from the bindings the operation works without problems. Now I really don't know what specifying a proxy server address might have to do with absolute or relative URIs. 
Whether or not the proxy server is used is the only difference I can see between running the client on the production machine and on the development machine. Does anyone have an idea what this exception might mean in this context, and how to solve the problem? Thanks in advance for any help!
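For what it's worth, one hedged guess based on the wording of the exception: proxyAddress may need to be an absolute URI including the scheme, whereas "10.20.30.40:8080" parses as a relative one. Something along these lines might be worth a try (untested sketch):

```xml
<wsHttpBinding>
  <!-- Hypothetical fix: include the scheme so the proxy address is an absolute URI. -->
  <binding name="WSHttpBinding_IMyService"
           useDefaultWebProxy="false"
           proxyAddress="http://10.20.30.40:8080">
    <security mode="None" />
  </binding>
</wsHttpBinding>
```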

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI

I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • std::basic_stringstream<unsigned char> won't compile with MSVC 10

    - by Michael J
    I'm trying to get UTF-8 chars to co-exist with ANSI 8-bit chars. My strategy has been to represent utf-8 chars as unsigned char so that appropriate overloads of functions can be used for the two character types. e.g. namespace MyStuff { typedef uchar utf8_t; typedef std::basic_string<utf8_t> U8string; } void SomeFunc(std::string &s); void SomeFunc(std::wstring &s); void SomeFunc(MyStuff::U8string &s); This all works pretty well until I try to use a stringstream. std::basic_ostringstream<MyStuff::utf8_t> ostr; ostr << 1; MSVC Visual C++ Express V10 won't compile this: c:\program files\microsoft visual studio 10.0\vc\include\xlocmon(213): warning C4273: 'id' : inconsistent dll linkage c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(65) : see previous definition of 'public: static std::locale::id std::numpunct<unsigned char>::id' c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(65) : while compiling class template static data member 'std::locale::id std::numpunct<_Elem>::id' with [ _Elem=Tk::utf8_t ] c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(1149) : see reference to function template instantiation 'const _Facet &std::use_facet<std::numpunct<_Elem>>(const std::locale &)' being compiled with [ _Facet=std::numpunct<Tk::utf8_t>, _Elem=Tk::utf8_t ] c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(1143) : while compiling class template member function 'std::ostreambuf_iterator<_Elem,_Traits> std::num_put<_Elem,_OutIt>:: do_put(_OutIt,std::ios_base &,_Elem,std::_Bool) const' with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t>, _OutIt=std::ostreambuf_iterator<Tk::utf8_t,std::char_traits<Tk::utf8_t>> ] c:\program files\microsoft visual studio 10.0\vc\include\ostream(295) : see reference to class template instantiation 'std::num_put<_Elem,_OutIt>' being compiled with [ _Elem=Tk::utf8_t, _OutIt=std::ostreambuf_iterator<Tk::utf8_t,std::char_traits<Tk::utf8_t>> ] c:\program files\microsoft visual studio 10.0\vc\include\ostream(281) : while compiling class template member function 'std::basic_ostream<_Elem,_Traits> & std::basic_ostream<_Elem,_Traits>::operator <<(int)' with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t> ] c:\program files\microsoft visual studio 10.0\vc\include\sstream(526) : see reference to class template instantiation 'std::basic_ostream<_Elem,_Traits>' being compiled with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t> ] c:\users\michael\dvl\tmp\console\console.cpp(23) : see reference to class template instantiation 'std::basic_ostringstream<_Elem,_Traits,_Alloc>' being compiled with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t>, _Alloc=std::allocator<uchar> ] . c:\program files\microsoft visual studio 10.0\vc\include\xlocmon(213): error C2491: 'std::numpunct<_Elem>::id' : definition of dllimport static data member not allowed with [ _Elem=Tk::utf8_t ] Any ideas? ** Edited 19 June 2012 ** OK, I've gotten closer to understanding this, but not how to solve it. As we all know, static class variables get defined twice: once in the class definition and once outside the class definition which establishes storage space. e.g. // in .h file class CFoo { // ... static int x; }; // in .cpp file int CFoo::x = 42; Now in the VC10 headers we get something like this: template<class _Elem> class numpunct : public locale::facet { // ... _CRTIMP2_PURE static locale::id id; // ... 
} When the header is included in an application, _CRTIMP2_PURE is defined as __declspec(dllimport), which means that the variable is imported from a DLL. Now the header also contains the following: template<class _Elem> locale::id numpunct<_Elem>::id; Note the absence of the __declspec(dllimport) qualifier; i.e., the class declaration says that the storage for the id variable lives in the DLL, but for the general case it gets defined outside the DLL. For the known cases, there are specialisations: template locale::id numpunct<char>::id; template locale::id numpunct<wchar_t>::id; These are protected by #ifs so that they are only included when building the DLL, and excluded otherwise; i.e., the char and wchar_t versions of numpunct ARE inside the DLL. So we have the class definition saying that id's storage is in the DLL, but that is only true for the char and wchar_t specialisations, meaning that my unsigned char version is doomed. :-( The only way forward that I can think of is to create my own specialisation: basically copying it from the header file and fixing it. This raises many issues. Anybody have a better idea?
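For what it's worth, here is a sketch of that "own specialisation" idea: supply the missing static member definition yourself in exactly one translation unit, so the linker has storage for it. This is an untested, MSVC-specific guess rather than portable C++, and other facets (ctype, codecvt, etc.) may need the same treatment depending on what the stream touches:

```cpp
#include <locale>

// Hypothetical workaround: provide the storage that the VC10 CRT only
// exports for char and wchar_t. Must live in exactly one .cpp file.
std::locale::id std::numpunct<unsigned char>::id;
```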

    Read the article

  • threaded serial port IOException when writing

    - by John McDonald
    Hi, I'm trying to write a small application that simply reads data from a socket, extracts some information (two integers) from the data, and sends the extracted information off on a serial port. The idea is that it should start and just keep going. In short, it works, but not for long. After a consistently short period I start to receive IOExceptions, and the socket receive buffer is swamped. The thread framework has been taken from the MSDN serial port example. The delay in Send(), readThread.Join(), is an attempt to delay Read() in order to give serial port interrupt processing a chance to occur, but I think I've misinterpreted the Join function. I either need to sync the threads more effectively or throw some data away as it comes in off the socket, which would be fine. The integer data is controlling a pan-tilt unit, and I'm sure four updates a second would be acceptable, but I'm not sure how best to achieve either; any ideas would be greatly appreciated, cheers. using System; using System.Collections.Generic; using System.Text; using System.IO.Ports; using System.Threading; using System.Net; using System.Net.Sockets; using System.IO; namespace ConsoleApplication1 { class Program { static bool _continue; static SerialPort _serialPort; static Thread readThread; static Thread sendThread; static String sendString; static Socket s; static int byteCount; static Byte[] bytesReceived; // synchronise send and receive threads static bool dataReceived; const int FIONREAD = 0x4004667F; static void Main(string[] args) { dataReceived = false; readThread = new Thread(Read); sendThread = new Thread(Send); bytesReceived = new Byte[16384]; // Create a new SerialPort object with default settings. _serialPort = new SerialPort("COM4", 38400, Parity.None, 8, StopBits.One); // Set the read/write timeouts _serialPort.WriteTimeout = 500; _serialPort.Open(); string moveMode = "CV "; _serialPort.WriteLine(moveMode); s = null; IPHostEntry hostEntry = Dns.GetHostEntry("localhost"); foreach (IPAddress address in hostEntry.AddressList) { IPEndPoint ipe = new IPEndPoint(address, 10001); Socket tempSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp); tempSocket.Connect(ipe); if (tempSocket.Connected) { s = tempSocket; s.ReceiveBufferSize = 16384; break; } else { continue; } } readThread.Start(); sendThread.Start(); while (_continue) { Thread.Sleep(10); ;// Console.WriteLine("main..."); } readThread.Join(); _serialPort.Close(); s.Close(); } public static void Read() { while (_continue) { try { //Console.WriteLine("Read"); if (!dataReceived) { byte[] outValue = BitConverter.GetBytes(0); // Check how many bytes have been received. s.IOControl(FIONREAD, null, outValue); uint bytesAvailable = BitConverter.ToUInt32(outValue, 0); if (bytesAvailable > 0) { Console.WriteLine("Read thread..." + bytesAvailable); byteCount = s.Receive(bytesReceived); string str = Encoding.ASCII.GetString(bytesReceived); //str = Encoding::UTF8->GetString( bytesReceived ); string[] split = str.Split(new Char[] { '\t', '\r', '\n' }); string filteredX = (split.GetValue(7)).ToString(); string filteredY = (split.GetValue(8)).ToString(); string[] AzSplit = filteredX.Split(new Char[] { '.' }); filteredX = (AzSplit.GetValue(0)).ToString(); string[] ElSplit = filteredY.Split(new Char[] { '.'
}); filteredY = (ElSplit.GetValue(0)).ToString(); // scale values int x = (int)(Convert.ToInt32(filteredX) * 1.9); string scaledAz = x.ToString(); int y = (int)(Convert.ToInt32(filteredY) * 1.9); string scaledEl = y.ToString(); String moveAz = "PS" + scaledAz + " "; String moveEl = "TS" + scaledEl + " "; sendString = moveAz + moveEl; dataReceived = true; } } } catch (TimeoutException) {Console.WriteLine("timeout exception");} catch (NullReferenceException) {Console.WriteLine("Read NULL reference exception");} } } public static void Send() { while (_continue) { try { if (dataReceived) { // sleep Read() thread to allow serial port interrupt processing readThread.Join(100); // send command to PTU dataReceived = false; Console.WriteLine(sendString); _serialPort.WriteLine(sendString); } } catch (TimeoutException) { Console.WriteLine("Timeout exception"); } catch (IOException) { Console.WriteLine("IOException exception"); } catch (NullReferenceException) { Console.WriteLine("Send NULL reference exception"); } } } } }
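Not a full answer, but one hedged observation: Thread.Join blocks the calling thread until the target thread exits, so readThread.Join(100) in Send() is effectively just a 100 ms sleep. A sketch of a tighter hand-off using an AutoResetEvent instead of the shared dataReceived flag (names here are illustrative and reuse the question's fields):

```csharp
using System;
using System.IO;
using System.Threading;

// Signalled by Read() whenever a fresh command string is ready.
static AutoResetEvent commandReady = new AutoResetEvent(false);
static readonly object commandLock = new object();

// In Read(), after building the command:
//     lock (commandLock) { sendString = moveAz + moveEl; }
//     commandReady.Set();   // wake the sender; extra sets coalesce, dropping stale data

public static void Send()
{
    while (_continue)
    {
        // Sleep until Read() signals; wake every 500 ms to re-check _continue.
        if (!commandReady.WaitOne(500))
            continue;

        string command;
        lock (commandLock) { command = sendString; }

        try { _serialPort.WriteLine(command); }
        catch (TimeoutException) { Console.WriteLine("Timeout exception"); }
        catch (IOException) { Console.WriteLine("IOException exception"); }
    }
}
```

Because AutoResetEvent coalesces multiple Set() calls into one wake-up, bursts from the socket naturally collapse to the latest command, which matches the stated tolerance of about four updates a second for the pan-tilt unit.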

    Read the article

  • You should NOT be writing jQuery in SharePoint if&hellip;

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I’m a pot stirrer.. a rabble rouser *rabble rabble* jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi, and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE “it depends”. But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let’s see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and get us thinking about things in a different way.

You should not be writing jQuery in SharePoint if you are not a developer…

Let’s start off with my most polarizing and rant-filled portion of the blog post. If you don’t know what you are doing, or you don’t have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don’t know what you are doing, you can do some NASTY things! One of the best stories I’ve heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn’t figure out what was going on! After much searching, they found that some genius jQuery developer wrote some code for an image rotator but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done! Now, I’m NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don’t know what you are doing, there are very REAL consequences that can have a major impact on your organization, AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had. Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and “runs faster” (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won’t deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer, you just aren’t going to take advantage of all of that and use it correctly. Ahhh.. but there is hope! There are a lot of jQuery resources out there to help you learn, and learn well! There are many experts on the subject that will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to “jQuery Fundamentals”. I just glanced through it, and this may be a great primer for you aspiring jQuery devs.
Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

I’ve said it once and I’ll say it over and over until you understand. jQuery is executed on the client’s computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can’t give the user a good experience, they will not use the site. So, if jQuery is causing the user to have a bad experience, don’t use it. I sometimes go so far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user’s computer will be. You just can’t guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (where have I heard that before?). Well… if you really have no choice, here are some tips to help improve the experience:

Avoid screen scraping. This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible.

Fine-tune your DOM searches. A lot of time can be eaten up just searching the DOM and ignoring table rows that you don’t need. Write better jQuery to only loop through the table rows that you need, or only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for instead of looping through all of the DOM over and over again.

Write better jQuery. Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also applies when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It’s a lost art that really needs to be used here.

You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

Developer: “Awesome… you can easily call SharePoint’s web services to retrieve and write data using SPServices!” Administrator: “Crap! you can easily call SharePoint’s web services to retrieve and write data using SPServices!” SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference). If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers.
These problems, thankfully, are not difficult to rectify if you are careful:

Limit list data retrieved. Use CAML to reduce the number of rows returned, and limit the fields returned using ViewFields (see the sketch at the end of this post). You should definitely be doing this regardless. If you aren’t, I hope your admin thumps you upside the head.

Batch large list updates. You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don’t know if there is a magic number for best performance; it really depends on how much data you are sending back more than the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.

Delay web service calls when possible. One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don’t load the data until it’s needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?

You should not be writing jQuery in SharePoint if there is a better solution…

jQuery is NOT the silver bullet in SharePoint; it is not the answer to every question, it is just another tool in the developer's toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution than jQuery?

When you can’t get away from performance problems. Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP, or do I have to crack open Visual Studio?

When you need to do something that jQuery can’t do. There are lots of things you can’t do in jQuery, like elevate privileges, event handlers, workflows, or interacting with back-end systems that have no web service interface. It just can’t do everything.

When it can be done faster and more efficiently another way. Why are you spending time writing jQuery to replicate something a DataViewWebPart would do in 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don’t have the option, okay. BUT if you do have the option, don’t reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

You should not be using jQuery in SharePoint if you are a moron…

Let’s finish up the blog on a high note… Yes.. it’s true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So.. don’t be that guy! Another good buddy of mine who works for Microsoft told me, “I loved jQuery in SharePoint…. until I had to support it.” He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH!
What do you expect to happen when you are pushing that much data over the wire and making that many web service calls at once?! It's one thing to write that kind of code and accept that it's just going to take a while; it's COMPLETELY another issue to do that and then complain when it's not lightning fast! Someone's gene pool needs some chlorine. So, I think this is a nice summary of the blog: DON'T be that guy… don't be a moron. How can you stop yourself from being a moron? Ah, glad you asked. Here are some tips:

Think
Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?

Search
What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?

Plan
Write good jQuery. Limit calculations and data sent over the wire, and don't reinvent the wheel when possible.

Test
Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor server resources to see the impact there as well.

Ask the experts
As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid, and for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line.

Repeat
So, when you think you have the best solution possible, repeat the steps above just to be safe.

Conclusion
jQuery is an awesome tool and has come in handy on many occasions. I'm even teaching a half-day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it's only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let's face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…
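As promised above, here is a rough sketch of the "limit and batch" advice using SPServices. The list name, field names, and chunk size are illustrative, and the batch XML follows the Lists.asmx UpdateListItems contract; treat it as a starting point, not gospel:

    // Retrieve only the rows and fields you actually need
    $().SPServices({
        operation: "GetListItems",
        listName: "Orders",                            // hypothetical list
        CAMLQuery: "<Query><Where><Eq><FieldRef Name='Status' />" +
                   "<Value Type='Text'>Open</Value></Eq></Where></Query>",
        CAMLViewFields: "<ViewFields><FieldRef Name='Title' /><FieldRef Name='Status' /></ViewFields>",
        CAMLRowLimit: 100,
        completefunc: function (xData, Status) {
            $(xData.responseXML).SPFilterNode("z:row").each(function () {
                // process only what came back
            });
        }
    });

    // Batch large updates: send items in chunks instead of one giant call
    function updateInChunks(items, chunkSize) {
        for (var i = 0; i < items.length; i += chunkSize) {
            var chunk = items.slice(i, i + chunkSize);
            var batch = "<Batch OnError='Continue'>";
            $.each(chunk, function (idx, item) {
                batch += "<Method ID='" + (idx + 1) + "' Cmd='Update'>" +
                         "<Field Name='ID'>" + item.id + "</Field>" +
                         "<Field Name='Status'>" + item.status + "</Field>" +
                         "</Method>";
            });
            batch += "</Batch>";
            $().SPServices({
                operation: "UpdateListItems",
                listName: "Orders",                    // hypothetical list
                updates: batch
            });
        }
    }

    // updateInChunks(allItems, 200);  // ~200 rows per call worked well for me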

    Read the article

  • How do I set up ASP.NET MVC 2 with MySQL?

    - by NovaJoe
    Okay, so I'm cheating here: this isn't actually a question, but a flat-out post. I know that goes against the grain of Stack Overflow, but this is too valuable not to share. I'm assuming that you have Visual Studio Professional 2008 and access to an instance of MySQL server. This MAY work with VS2008 Web edition, but I'm not at all sure.

    1. If you haven't already, install MySQL Connector for .NET (6.2.2.0 at the time of this write-up). Optional: install MySQL GUI Tools.
    2. If you haven't already, install MVC 2 RTM, or better yet, use Microsoft's Web Platform Installer.
    3. Create an empty MySQL database. If you don't want to access your application with the MySQL root user account (insecure), create a user account and assign the appropriate privileges (outside the scope of this write-up).
    4. Create a new MVC 2 application in Visual Studio.
    5. In the MVC 2 app, reference MySql.Web.dll. It will either be in your GAC or in the folder where the MySQL Connector installer put it.
    6. Modify the connection strings portion of your web.config:

        <connectionStrings>
          <remove name="LocalMySqlServer"/>
          <add name="MySqlMembershipConnection"
               connectionString="Data Source=[MySql server host name];user id=[user];password=[password];database=[database name];"
               providerName="MySql.Data.MySqlClient"/>
        </connectionStrings>

    7. Modify the membership portion of your web.config:

        <membership defaultProvider="MySqlMembershipProvider">
          <providers>
            <clear/>
            <add name="MySqlMembershipProvider"
                 type="MySql.Web.Security.MySQLMembershipProvider, MySql.Web, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                 connectionStringName="MySqlMembershipConnection"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="true"
                 requiresQuestionAndAnswer="false"
                 requiresUniqueEmail="true"
                 passwordFormat="Hashed"
                 maxInvalidPasswordAttempts="5"
                 minRequiredPasswordLength="6"
                 minRequiredNonalphanumericCharacters="0"
                 passwordAttemptWindow="10"
                 applicationName="/"
                 autogenerateschema="true"/>
          </providers>
        </membership>

    8. Modify the role manager portion of your web.config:

        <roleManager enabled="true" defaultProvider="MySqlRoleProvider">
          <providers>
            <clear />
            <add connectionStringName="MySqlMembershipConnection"
                 applicationName="/"
                 name="MySqlRoleProvider"
                 type="MySql.Web.Security.MySQLRoleProvider, MySql.Web, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                 autogenerateschema="true"/>
          </providers>
        </roleManager>

    9. Modify the profile portion of your web.config:

        <profile>
          <providers>
            <clear/>
            <add type="MySql.Web.Security.MySQLProfileProvider, MySql.Web, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                 name="MySqlProfileProvider"
                 applicationName="/"
                 connectionStringName="MySqlMembershipConnection"
                 autogenerateschema="true"/>
          </providers>
        </profile>

    At this point, you ought to be able to run the app and have the default ASP.NET MVC 2 home page come up in your browser. However, it may be a better idea to first run the ASP.NET Web Configuration Tool (in the Visual Studio top menus: Project - ASP.NET Configuration). Once the tool launches, check out each of the tabs; no errors = all good. Nathan Bridgewater's blog was essential to getting this working. Kudos, Nathan. Look for the "Configuration Tool" heading halfway down the page. The public key token on the MySql.Web.dll that I've posted here ought not to change any time soon. But in case you suspect a bad token string from copying and pasting or whatever, just use the Visual Studio command line to run "sn -T [Path\to\your.dll]" to get the correct public key token.
There you have it, ASP.NET MVC 2 running over MySQL. Cheers!

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong, and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic.

    From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted:

        I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today's example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

    Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example:

        A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ...

        A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ...

        A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ...

    The idea of "hypertext" plays a central role, much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments:

        When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

    I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system:

        GET  /foos/{id}   # read a Foo
        POST /foos/{id}   # create a Foo
        PUT  /foos/{id}   # update a Foo

    Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment.
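    To make that guess concrete, here is a purely invented illustration (mine, not Fielding's) of what a hypermedia-driven response to GET /foos might look like; the media type name and link relations are made up:

        GET /foos
        Content-Type: application/vnd.example.foo+json

        {
          "foos": [
            {
              "name": "First foo",
              "links": [
                { "rel": "self",   "href": "http://example.com/foos/17" },
                { "rel": "edit",   "href": "http://example.com/foos/17" },
                { "rel": "delete", "href": "http://example.com/foos/17" }
              ]
            }
          ],
          "links": [
            { "rel": "create", "href": "http://example.com/foos" }
          ]
        }

    The client starts from the bookmark URI, understands the media type, and discovers everything else (including the /foos/17 URI and the actions available on it) from the links in the response rather than from out-of-band documentation.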
    Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich:

        Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won't, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck.

    So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying, and how do you put it into practice when documenting/implementing REST APIs?

    Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).

    Read the article

  • JAX-WS MarshalException with custom JAX-B bindings: Unable to marshal type "java.lang.String" as an element

    - by MoneyMark
    I seem to be having an issue with JAX-WS and JAX-B playing nicely together. I need to consume a web service which has a predefined WSDL. When executing the generated client, I am receiving the following error:

        javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException
         - with linked exception:
        [com.sun.istack.SAXException2: unable to marshal type "java.lang.String" as an element because it is missing an @XmlRootElement annotation]

    This started occurring when I used an external custom binding file to map needlessly complex types to java.lang.String. Here is an excerpt from my binding file:

        <?xml version="1.0" encoding="UTF-8"?>
        <bindings xmlns="http://java.sun.com/xml/ns/jaxb" version="2.0"
                  xmlns:xs="http://www.w3.org/2001/XMLSchema"
                  xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                  xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc">
          <bindings schemaLocation="http://localhost:7777/GESOR/services/RegistryUpdatePort?wsdl#types?schema1" node="/xs:schema">
            <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='company_name']">
              <property>
                <baseType name="java.lang.String" />
              </property>
            </bindings>
            <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='address1']">
              <property>
                <baseType name="java.lang.String" />
              </property>
            </bindings>
            <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='address2']">
              <property>
                <baseType name="java.lang.String" />
              </property>
            </bindings>
            <!-- ...more fields -->
          </bindings>
        </bindings>

    When executing wsimport against the provided WSDL, StwrdCompany is generated with the following variables declared:

        @XmlRootElement(name = "StwrdCompany")
        public class StwrdCompany {

            @XmlElementRef(name = "company_name", type = JAXBElement.class)
            protected String companyName;

            @XmlElementRef(name = "address1", type = JAXBElement.class)
            protected String address1;

            @XmlElementRef(name = "address2", type = JAXBElement.class)
            // ... more fields
            // ... getters/setters

            @XmlAccessorType(XmlAccessType.FIELD)
            @XmlType(name = "", propOrder = { "value" })
            public static class CompanyName {

                @XmlValue
                protected String value;
                @XmlAttribute
                protected Boolean updateToNULL;

                /**
                 * Gets the value of the value property.
                 * @return possible object is {@link String }
                 */
                public String getValue() {
                    return value;
                }

                /**
                 * Sets the value of the value property.
                 * @param value allowed object is {@link String }
                 */
                public void setValue(String value) {
                    this.value = value;
                }

                /**
                 * Gets the value of the updateToNULL property.
                 * @return possible object is {@link Boolean }
                 */
                public boolean isUpdateToNULL() {
                    if (updateToNULL == null) {
                        return false;
                    } else {
                        return updateToNULL;
                    }
                }

                /**
                 * Sets the value of the updateToNULL property.
                 * @param value allowed object is {@link Boolean }
                 */
                public void setUpdateToNULL(Boolean value) {
                    this.updateToNULL = value;
                }

                // ... more inner classes
            }
        }

    Finally, here is the associated snippet from the WSDL that seems to be causing such grief.
        <xs:element name="StwrdCompany">
          <xs:complexType>
            <xs:sequence>
              <xs:element maxOccurs="1" minOccurs="0" name="company_name" nillable="true">
                <xs:complexType>
                  <xs:simpleContent>
                    <xs:extension base="xs:string">
                      <xs:attribute default="false" name="updateToNULL" type="xs:boolean"/>
                    </xs:extension>
                  </xs:simpleContent>
                </xs:complexType>
              </xs:element>
              <xs:element maxOccurs="1" minOccurs="0" name="address1" nillable="true">
                <xs:complexType>
                  <xs:simpleContent>
                    <xs:extension base="xs:string">
                      <xs:attribute default="false" name="updateToNULL" type="xs:boolean"/>
                    </xs:extension>
                  </xs:simpleContent>
                </xs:complexType>
              </xs:element>
              <!-- ... more fields in the same format -->
              <xs:element maxOccurs="1" minOccurs="0" name="p_source_timestamp" nillable="false" type="xs:string"/>
            </xs:sequence>
            <xs:attribute name="company_xid" type="xs:string"/>
          </xs:complexType>
        </xs:element>

    The reason for the custom binding is so I can map user input from a POJO into the StwrdCompany object more easily, whether through direct instantiation or through the use of Dozer for bean mapping. I was unable to successfully map between the objects without the custom binding. Finally, one other thing I tried was setting a globalBindings definition:

        <globalBindings generateValueClass="false"></globalBindings>

    This caused the server to throw an argument mismatch exception, since the SOAP message was using xs:string XML types instead of passing the defined complex types, so I abandoned that idea. Any insight into what is causing the MarshalException, or how to go about calling the web service and mapping these objects more easily, is greatly appreciated. I've been searching for days, and I sadly think I am stumped.

    Read the article

  • The Incremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

    The drivers of software development are increments: small increments, tiny increments. An increment is a slice of the overall requirement scope thin enough to implement and get feedback on from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1]

    To make such high-frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story, or some other documentation of what's supposed to get implemented by tomorrow evening at the latest, is one side of the medal. The other is where to put the logic in all of the code base.

    To implement an increment, only logic statements are needed. Functionality like Quality is just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy, none of that is really needed. It's just about the right expressions and conditional execution paths, plus some memory allocation. Automatic function inlining by compilers makes it clear how unimportant functions are for delivering value to users at runtime.

    But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, or more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

    That said, functions are important. They are even the pivotal element of software development. We can't code without them, whether you write a function yourself or not, because there's always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:

        puts "Hello, world!"

    In C# more is necessary:

        class Program {
            public static void Main () {
                System.Console.Write("Hello, world!");
            }
        }

    C# makes the Entry Point function explicit; Ruby does not. But still it's there. So you can think of logic as always running in some function.

    Which brings me back to increments: in order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note (a user story) on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on.

    All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That's why most code katas define exactly how the API of a solution should look: it's a function, maybe two or three, not more.

    A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider.
    Even a single function must deliver not only on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put logic in, you have a tangible starting point. So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or, to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

    Entry points

    Of course, the logic of a software system will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on.

    A batch program like the above hello-world application has only a single Entry Point. All logic is reached from there, regardless of how deeply it's nested in classes. But a program with a user interface has at least two Entry Points: one is the main function called upon startup; the other is the button click event handler for "Show my score". And maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected (because then some logic could check whether the button should be enabled because all questions got answered), or another Entry Point for the logic to be executed when the program is closed (because then the choices made should be persisted).

    You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that's the main function. With GUI programs on the desktop it's the event handlers. With web programs it's the handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2]

    Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry Points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button was clicked, the system timer ticked, or data arrived over a wire.[3]

    Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, and keys to enter.

    Collections of batch processors

    I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications: software is just a collection of batch processors. Earlier it was just one per program; today it's hundreds we bundle up into applications.

    Each batch processor is represented by an Entry Point as its root and works on a number of resources, from which it reads data to process and to which it writes results. These resources can be the keyboard, main memory, a hard disk, a communication line, or a display. Together, many batch processors, large and small, form the applications the user perceives as a single whole.

    Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

    Features

    Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.
    At the same time it's a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code.

    In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog. The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog.

    This is all very simple, but you see, there is not just one thing happening, but several:

    1. Get input (email address, password)
    2. Load user for email address
       2.1 If user not found, report error
    3. Check password
       3.1 Hash password
       3.2 Compare hash to hash stored in user
    4. Show next dialog

    Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible; it's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features.

    Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once, you can move forward increment by increment, e.g.:

    - First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
    - Then hard-code a single user and check the email address. Features 2 and 2.1.
    - Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
    - Replace the hard-coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
    - Calculate the real hash for the password. Feature 3.1.
    - Switch to the final user directory technology.

    Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function by tomorrow night, then just go for a couple of features, or even just one.

    That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features, because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, and to let the product owner check them for acceptance individually.

    Increments consist of features; entry point functions consist of feature functions. So you can view software as a hierarchy of requirements, from broad to thin, which maps to a hierarchy of functions, with entry points at the top.

    I like this image of software as a self-similar structure on many levels of abstraction, where requirements and code match each other. That to me is true agile design: the core tenet of Agility, to move forward in increments, is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible.
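    To make the feature-function idea tangible, here is a minimal sketch of the login Entry Point composed of feature functions. All names are invented for illustration, and the idea is language-neutral (sketched here in JavaScript for brevity):

        // Entry Point: triggered by the login button's click event
        function onLoginClicked() {
            var credentials = readCredentials();            // Feature 1: get input
            var user = loadUser(credentials.email);         // Feature 2: load user for email address
            if (user === null) {
                reportError("Unknown email address");       // Feature 2.1: report error
                return;
            }
            var hash = hashPassword(credentials.password);  // Feature 3.1: hash password
            if (hash !== user.passwordHash) {               // Feature 3.2: compare to stored hash
                reportError("Wrong password");
                return;
            }
            showNextDialog();                               // Feature 4: show next dialog
        }

    Each called function maps one-to-one to a feature from the list above, so an increment like "Features 2 and 2.1" has an obvious home in the code.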
    Software production then moves forward through requirements one increment at a time, and one function at a time.

    In closing

    Product owners and developers are different, but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner, user stories and features, need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features.

    If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day, until your understanding has grown to a level where you are able to define the next Spinning increment.

    [2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation logic then will be put into a dozen functions. Still, though, it's important to determine which Entry Points exactly get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

    [3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also: for the event "program started".

    Read the article

  • Silverlight Chat WrapPanel Crash / Bug

    - by Matt
    I've been given the task of creating a simple Silverlight chat box for two people. My control must adhere to the following requirements:

    - Scrollable
    - Text must wrap if it's too long
    - When a new item / message is added, it must scroll that item into view

    Now I've successfully made a UserControl that meets these requirements, but I've run into a possible bug / crash that I can't for the life of me fix. I'm looking for either a fix for the bug or a different approach to creating a scrollable chat control. Here's the code I've been using. We'll start with my XAML for the chat window:

        <ListBox x:Name="lbChatHistory" Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2"
                 ScrollViewer.VerticalScrollBarVisibility="Visible"
                 ScrollViewer.HorizontalScrollBarVisibility="Disabled">
          <ListBox.ItemTemplate>
            <DataTemplate>
              <Grid Background="Beige">
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="70"></ColumnDefinition>
                  <ColumnDefinition Width="Auto"></ColumnDefinition>
                </Grid.ColumnDefinitions>
                <TextBlock x:Name="lblPlayer" Foreground="{Binding ForeColor}" Text="{Binding Player}" Grid.Column="0"></TextBlock>
                <ContentPresenter Grid.Column="1" Width="200" Content="{Binding Message}" />
              </Grid>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>

    The idea is to add a new item to the ListBox. The item (as laid out in the XAML) is a simple two-column grid: one column for the username, and one column for the message.

    Now the "items" that I add to the ListBox are instances of a custom class. It has three properties (Player, ForeColor, and Message) that I bind to in my XAML:

    - Player is a string for the current user to display.
    - ForeColor is just a foreground color preference. It helps distinguish between messages.
    - Message is a WrapPanel. I programmatically break the supplied string on the whitespace between each word, then for each word I add a new TextBlock element to the WrapPanel.

    Here is the custom class.
        public class ChatMessage : DependencyObject, INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            public static DependencyProperty PlayerProperty = DependencyProperty.Register(
                "Player", typeof( string ), typeof( ChatMessage ),
                new PropertyMetadata( new PropertyChangedCallback( OnPlayerPropertyChanged ) ) );

            public static DependencyProperty MessageProperty = DependencyProperty.Register(
                "Message", typeof( WrapPanel ), typeof( ChatMessage ),
                new PropertyMetadata( new PropertyChangedCallback( OnMessagePropertyChanged ) ) );

            public static DependencyProperty ForeColorProperty = DependencyProperty.Register(
                "ForeColor", typeof( SolidColorBrush ), typeof( ChatMessage ),
                new PropertyMetadata( new PropertyChangedCallback( OnForeColorPropertyChanged ) ) );

            public ChatMessage()
            {
                Message = new WrapPanel();
                ForeColor = new SolidColorBrush( Colors.White );
            }

            private static void OnForeColorPropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e )
            {
                ChatMessage c = d as ChatMessage;
                c.ForeColor = ( SolidColorBrush ) e.NewValue;
            }

            private static void OnMessagePropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e )
            {
                ChatMessage c = d as ChatMessage;
                c.Message = ( WrapPanel ) e.NewValue;
            }

            private static void OnPlayerPropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e )
            {
                ChatMessage c = d as ChatMessage;
                c.Player = e.NewValue.ToString();
            }

            public SolidColorBrush ForeColor
            {
                get { return ( SolidColorBrush ) GetValue( ForeColorProperty ); }
                set
                {
                    SetValue( ForeColorProperty, value );
                    if ( PropertyChanged != null )
                        PropertyChanged( this, new PropertyChangedEventArgs( "ForeColor" ) );
                }
            }

            public string Player
            {
                get { return ( string ) GetValue( PlayerProperty ); }
                set
                {
                    SetValue( PlayerProperty, value );
                    if ( PropertyChanged != null )
                        PropertyChanged( this, new PropertyChangedEventArgs( "Player" ) );
                }
            }

            public WrapPanel Message
            {
                get { return ( WrapPanel ) GetValue( MessageProperty ); }
                set
                {
                    SetValue( MessageProperty, value );
                    if ( PropertyChanged != null )
                        PropertyChanged( this, new PropertyChangedEventArgs( "Message" ) );
                }
            }
        }

    Lastly, I add my items to the ListBox. Here's the simple method; it takes the above ChatMessage class as a parameter:

        public void AddChatItem( ChatMessage msg )
        {
            lbChatHistory.Items.Add( msg );
            lbChatHistory.ScrollIntoView( msg );
        }

    Now I've tested this and it all works. The problem comes when I use the scroll bar. You can scroll down using the side scroll bar or the arrow keys, but when you scroll up, Silverlight crashes. Firebug reports a ManagedRuntimeError #4004 with a XamlParseException. I'm so close to having this control work, I can taste it! Any thoughts on what I should do or change? Is there a better approach than the one I've taken? Thanks in advance.

    UPDATE: I've found an alternative solution using a ScrollViewer and an ItemsControl instead of a ListBox control. For the most part it's stable.

    Read the article

< Previous Page | 812 813 814 815 816 817 818 819 820 821 822 823  | Next Page >