Search Results

Search found 10662 results on 427 pages for 'parameter passing'.


  • Many network adapters at machine, need to find one that is used for traffic in Windows (from .net)

    - by viko
    My application uses a web service. I want to track which workstation each request comes from, so I send the MAC address as a parameter of every method. But when I started testing the application for real, I found workstations that have several network adapters - Ethernet, wireless, Bluetooth. I currently get the MAC address with the following code:

        var networkAdapters = NetworkInterface.GetAllNetworkInterfaces();
        if (networkAdapters == null || networkAdapters.Length == 0)
            return string.Empty;

        string address = string.Empty;
        foreach (var adapter in networkAdapters)
        {
            var a = adapter.GetPhysicalAddress();
            if (a != null && a.ToString() != string.Empty)
            {
                address = a.ToString();
                break;
            }
        }
        return address;

    Sometimes the web service receives different MAC addresses from the same workstation, but I always want to get just one MAC address. Please help.
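    A minimal sketch of one way to make the choice deterministic (not from the original post): ask the OS which local address is actually used to reach the web service and report the MAC of that adapter. The host name and port below are placeholder assumptions; connecting a UDP socket only resolves the route and sends no traffic.

        using System.Linq;
        using System.Net;
        using System.Net.NetworkInformation;
        using System.Net.Sockets;

        public static class MacAddressHelper
        {
            public static string GetMacUsedForTraffic(string serviceHost, int port = 80)
            {
                // Determine the local IP address the routing table picks for this destination.
                IPAddress localAddress;
                using (var udp = new UdpClient(serviceHost, port))
                    localAddress = ((IPEndPoint)udp.Client.LocalEndPoint).Address;

                // Find the adapter that owns that address and return its MAC.
                var adapter = NetworkInterface.GetAllNetworkInterfaces()
                    .Where(a => a.OperationalStatus == OperationalStatus.Up)
                    .FirstOrDefault(a => a.GetIPProperties().UnicastAddresses
                        .Any(u => u.Address.Equals(localAddress)));

                return adapter == null ? string.Empty : adapter.GetPhysicalAddress().ToString();
            }
        }

    With something like this the same workstation keeps reporting the same adapter as long as its route to the service does not change.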

    Read the article

  • PHP DateTime accept multiple formats?

    - by John Smith
    I'm trying to construct a DateTime object with multiple accepted formats. According to the DateTime::createFromFormat docs, the first parameter (format) must be a single string. I was wondering if there is a way to create from multiple formats. In my case, I want the year in my format to be optional:

        DateTime::createFromFormat('Y-m-d', $date);
        DateTime::createFromFormat('m-d', $date);

    so that a user can input just 'm-d' and the year would be assumed to be 2013. If I want multiple accepted formats, do I have to call createFromFormat once per format? The shortest thing I have found for my scenario is:

        DateTime::createFromFormat('m-d', $date) ?: DateTime::createFromFormat('Y-m-d', $date);

    Read the article

  • C++ Reading and Editing pixels of a bitmap image

    - by BettyD
    I'm trying to create a program which reads in an image (starting with bitmaps because they're the easiest), searches through the image for a particular pattern (e.g. one pixel of (255, 0, 0) followed by one of (0, 0, 255)), changes the pixels in some way (only in memory, without saving back to the original file), and then displays the result. I'm on Windows, preferably using plain GDI, and I want to avoid third-party libraries because I really want to understand what I'm doing in this test program. Can anyone help in any way - a website, advice, example code? Thanks.
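    The question asks for plain C++ and GDI, so treat the following only as an illustration of the general read-scan-modify flow, sketched in C# with GDI+ (System.Drawing); the file name and the replacement colour are arbitrary assumptions.

        using System.Drawing;

        static Bitmap HighlightRedBluePairs(string path)
        {
            // Load the bitmap, scan left-to-right for a pure red pixel followed by
            // a pure blue pixel, and recolour both in memory only.
            var bmp = new Bitmap(path);
            for (int y = 0; y < bmp.Height; y++)
            {
                for (int x = 0; x < bmp.Width - 1; x++)
                {
                    Color c1 = bmp.GetPixel(x, y);
                    Color c2 = bmp.GetPixel(x + 1, y);
                    if (c1.R == 255 && c1.G == 0 && c1.B == 0 &&
                        c2.R == 0 && c2.G == 0 && c2.B == 255)
                    {
                        bmp.SetPixel(x, y, Color.Lime);
                        bmp.SetPixel(x + 1, y, Color.Lime);
                    }
                }
            }
            return bmp; // draw this onto a PictureBox or Graphics surface to display it
        }

    GetPixel/SetPixel are slow on large images; locking the bitmap bits is the usual faster route, but the loop above is the easiest version to reason about.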

    Read the article

  • How do I return integers from a string?

    - by kannan.ambadi
    Suppose you are passing in a string (for example: "My name has 1 K, 2 A and 3 N") which may contain integers, letters, or special characters, and you want to retrieve only the numbers from the input string. We can implement this in many ways, such as splitting the string into an array or using the TryParse method. I would like to share another idea: using regular expressions. All you have to do is create an instance of Regex with a pattern that matches everything except digits. The Regex class defines a Split method, which splits the input string based on the pattern provided during object initialization. We can write the code as given below:

        using System.Text.RegularExpressions;

        public static int[] SplitIdSeqenceValues(object combinedArgs)
        {
            var _argsSeperator = new Regex(@"\D+", RegexOptions.Compiled);
            string[] splitedIntegers = _argsSeperator.Split(combinedArgs.ToString());
            var args = new int[splitedIntegers.Length];
            for (int i = 0; i < splitedIntegers.Length; i++)
                args[i] = MakeSafe.ToSafeInt32(splitedIntegers[i]);
            return args;
        }

    Setting RegexOptions.Compiled gives the regular expression a performance boost when it is used repeatedly, because the pattern is compiled ahead of time. Happy Programming :))
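    A closely related variant (a sketch added here, not part of the original post): matching \d+ with Regex.Matches instead of splitting on \D+ avoids the empty entries that Split produces when the string starts or ends with non-digit characters, and removes the need for a safe-parse helper.

        using System.Linq;
        using System.Text.RegularExpressions;

        public static class IntegerExtractor
        {
            private static readonly Regex Digits = new Regex(@"\d+", RegexOptions.Compiled);

            public static int[] ExtractIntegers(string input)
            {
                // Each match is a maximal run of digits, so no empty entries appear.
                return Digits.Matches(input ?? string.Empty)
                             .Cast<Match>()
                             .Select(m => int.Parse(m.Value))
                             .ToArray();
            }
        }

    For example, ExtractIntegers("My name has 1 K, 2 A and 3 N") returns { 1, 2, 3 }.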

    Read the article

  • jQuery Time Entry with Time Navigation Keys

    - by Rick Strahl
    So, how do you display time values in your Web applications? Displaying date AND time values in applications is lot less standardized than date display only. While date input has become fairly universal with various date picker controls available, time entry continues to be a bit of a non-standardized. In my own applications I tend to use the jQuery UI DatePicker control for date entries and it works well for that. Here's an example: The date entry portion is well defined and it makes perfect sense to have a calendar pop up so you can pick a date from a rich UI when necessary. However, time values are much less obvious when it comes to displaying a UI or even just making time entries more useful. There are a slew of time picker controls available but other than adding some visual glitz, they are not really making time entry any easier. Part of the reason for this is that time entry is usually pretty simple. Clicking on a dropdown of any sort and selecting a value from a long scrolling list tends to take more user interaction than just typing 5 characters (7 if am/pm is used). Keystrokes can make Time Entry easier Time entry maybe pretty simple, but I find that adding a few hotkeys to handle date navigation can make it much easier. Specifically it'd be nice to have keys to: Jump to the current time (Now) Increase/decrease minutes Increase/decrease hours The timeKeys jQuery PlugIn Some time ago I created a small plugin to handle this scenario. It's non-visual other than tooltip that pops up when you press ? to display the hotkeys that are available: Try it Online The keys loosely follow the ancient Quicken convention of using the first and last letters of what you're increasing decreasing (ie. H to decrease, R to increase hours and + and - for the base unit or minutes here). All navigation happens via the keystrokes shown above, so it's all non-visual, which I think is the most efficient way to deal with dates. To hook up the plug-in, start with the textbox:<input type="text" id="txtTime" name="txtTime" value="12:05 pm" title="press ? for time options" /> Note the title which might be useful to alert people using the field that additional functionality is available. To hook up the plugin code is as simple as:$("#txtTime").timeKeys(); You essentially tie the plugin to any text box control. OptionsThe syntax for timeKeys allows for an options map parameter:$(selector).timeKeys(options); Options are passed as a parameter map object which can have the following properties: timeFormatYou can pass in a format string that allows you to format the date. The default is "hh:mm t" which is US time format that shows a 12 hour clock with am/pm. Alternately you can pass in "HH:mm" which uses 24 hour time. HH, hh, mm and t are translated in the format string - you can arrange the format as you see fit. callbackYou can also specify a callback function that is called when the date value has been set. This allows you to either re-format the date or perform post processing (such as displaying highlight if it's after a certain hour for example). Here's another example that uses both options:$("#txtTime").timeKeys({ timeFormat: "HH:mm", callback: function (time) { showStatus("new time is: " + time.toString() + " " + $(this).val() ); } }); The plugin code itself is fairly simple. It hooks the keydown event and checks for the various keys that affect time navigation which is straight forward. 
The bulk of the code however deals with parsing the time value and formatting the output using a Time class that implements parsing, formatting and time navigation methods. Here's the code for the timeKeys jQuery plug-in:/// <reference path="jquery.js" /> /// <reference path="ww.jquery.js" /> (function ($) { $.fn.timeKeys = function (options) { /// <summary> /// Attaches a set of hotkeys to time fields /// + Add minute - subtract minute /// H Subtract Hour R Add houR /// ? Show keys /// </summary> /// <param name="options" type="object"> /// Options: /// timeFormat: "hh:mm t" by default HH:mm alternate /// callback: callback handler after time assignment /// </param> /// <example> /// var proxy = new ServiceProxy("JsonStockService.svc/"); /// proxy.invoke("GetStockQuote",{symbol:"msft"},function(quote) { alert(result.LastPrice); },onPageError); ///</example> if (this.length < 1) return this; var opt = { timeFormat: "hh:mm t", callback: null } $.extend(opt, options); return this.keydown(function (e) { var $el = $(this); var time = new Time($el.val()); //alert($(this).val() + " " + time.toString() + " " + time.date.toString()); switch (e.keyCode) { case 78: // [N]ow time = new Time(new Date()); break; case 109: case 189: // - time.addMinutes(-1); break; case 107: case 187: // + time.addMinutes(1); break; case 72: //H time.addHours(-1); break; case 82: //R time.addHours(1); break; case 191: // ? if (e.shiftKey) $(this).tooltip("<b>N</b> Now<br/><b>+</b> add minute<br /><b>-</b> subtract minute<br /><b>H</b> Subtract Hour<br /><b>R</b> add hour", 4000, { isHtml: true }); return false; default: return true; } $el.val(time.toString(opt.timeFormat)); if (opt.callback) { // call async and set context in this element setTimeout(function () { opt.callback.call($el.get(0), time) }, 1); } return false; }); } Time = function (time, format) { /// <summary> /// Time object that can parse and format /// a time values. /// </summary> /// <param name="time" type="object"> /// A time value as a string (12:15pm or 23:01), a Date object /// or time value. /// /// </param> /// <param name="format" type="string"> /// Time format string: /// HH:mm (23:01) /// hh:mm t (11:01 pm) /// </param> /// <example> /// var time = new Time( new Date()); /// time.addHours(5); /// time.addMinutes(10); /// var s = time.toString(); /// /// var time2 = new Time(s); // parse with constructor /// var t = time2.parse("10:15 pm"); // parse with .parse() method /// alert( t.hours + " " + t.mins + " " + t.ampm + " " + t.hours25) ///</example> var _I = this; this.date = new Date(); this.timeFormat = "hh:mm t"; if (format) this.timeFormat = format; this.parse = function (time) { /// <summary> /// Parses time value from a Date object, or string in format of: /// 12:12pm or 23:01 /// </summary> /// <param name="time" type="any"> /// A time value as a string (12:15pm or 23:01), a Date object /// or time value. 
/// /// </param> if (!time) return null; // Date if (time.getDate) { var t = {}; var d = time; t.hours24 = d.getHours(); t.mins = d.getMinutes(); t.ampm = "am"; if (t.hours24 > 11) { t.ampm = "pm"; if (t.hours24 > 12) t.hours = t.hours24 - 12; } time = t; } if (typeof (time) == "string") { var parts = time.split(":"); if (parts < 2) return null; var time = {}; time.hours = parts[0] * 1; time.hours24 = time.hours; time.mins = parts[1].toLowerCase(); if (time.mins.indexOf("am") > -1) { time.ampm = "am"; time.mins = time.mins.replace("am", ""); if (time.hours == 12) time.hours24 = 0; } else if (time.mins.indexOf("pm") > -1) { time.ampm = "pm"; time.mins = time.mins.replace("pm", ""); if (time.hours < 12) time.hours24 = time.hours + 12; } time.mins = time.mins * 1; } _I.date.setMinutes(time.mins); _I.date.setHours(time.hours24); return time; }; this.addMinutes = function (mins) { /// <summary> /// adds minutes to the internally stored time value. /// </summary> /// <param name="mins" type="number"> /// number of minutes to add to the date /// </param> _I.date.setMinutes(_I.date.getMinutes() + mins); } this.addHours = function (hours) { /// <summary> /// adds hours the internally stored time value. /// </summary> /// <param name="hours" type="number"> /// number of hours to add to the date /// </param> _I.date.setHours(_I.date.getHours() + hours); } this.getTime = function () { /// <summary> /// returns a time structure from the currently /// stored time value. /// Properties: hours, hours24, mins, ampm /// </summary> return new Time(new Date()); h } this.toString = function (format) { /// <summary> /// returns a short time string for the internal date /// formats: 12:12 pm or 23:12 /// </summary> /// <param name="format" type="string"> /// optional format string for date /// HH:mm, hh:mm t /// </param> if (!format) format = _I.timeFormat; var hours = _I.date.getHours(); if (format.indexOf("t") > -1) { if (hours > 11) format = format.replace("t", "pm") else format = format.replace("t", "am") } if (format.indexOf("HH") > -1) format = format.replace("HH", hours.toString().padL(2, "0")); if (format.indexOf("hh") > -1) { if (hours > 12) hours -= 12; if (hours == 0) hours = 12; format = format.replace("hh", hours.toString().padL(2, "0")); } if (format.indexOf("mm") > -1) format = format.replace("mm", _I.date.getMinutes().toString().padL(2, "0")); return format; } // construction if (time) this.time = this.parse(time); } String.prototype.padL = function (width, pad) { if (!width || width < 1) return this; if (!pad) pad = " "; var length = width - this.length if (length < 1) return this.substr(0, width); return (String.repeat(pad, length) + this).substr(0, width); } String.repeat = function (chr, count) { var str = ""; for (var x = 0; x < count; x++) { str += chr }; return str; } })(jQuery); The plugin consists of the actual plugin and the Time class which handles parsing and formatting of the time value via the .parse() and .toString() methods. Code like this always ends up taking up more effort than the actual logic unfortunately. There are libraries out there that can handle this like datejs or even ww.jquery.js (which is what I use) but to keep the code self contained for this post the plugin doesn't rely on external code. There's one optional exception: The code as is has one dependency on ww.jquery.js  for the tooltip plugin that provides the small popup for all the hotkeys available. 
You can replace that code with some other mechanism to display hotkeys or simply remove it since that behavior is optional. While we're at it: A jQuery dateKeys plugIn Although date entry tends to be much better served with drop down calendars to pick dates from, often it's also easier to pick dates using a few simple hotkeys. Navigation that uses + - for days and M and H for MontH navigation, Y and R for YeaR navigation are a quick way to enter dates without having to resort to using a mouse and clicking around to what you want to find. Note that this plugin does have a dependency on ww.jquery.js for the date formatting functionality.$.fn.dateKeys = function (options) { /// <summary> /// Attaches a set of hotkeys to date 'fields' /// + Add day - subtract day /// M Subtract Month H Add montH /// Y Subtract Year R Add yeaR /// ? Show keys /// </summary> /// <param name="options" type="object"> /// Options: /// dateFormat: "MM/dd/yyyy" by default "MMM dd, yyyy /// callback: callback handler after date assignment /// </param> /// <example> /// var proxy = new ServiceProxy("JsonStockService.svc/"); /// proxy.invoke("GetStockQuote",{symbol:"msft"},function(quote) { alert(result.LastPrice); },onPageError); ///</example> if (this.length < 1) return this; var opt = { dateFormat: "MM/dd/yyyy", callback: null }; $.extend(opt, options); return this.keydown(function (e) { var $el = $(this); var d = new Date($el.val()); if (!d) d = new Date(1900, 0, 1, 1, 1); var month = d.getMonth(); var year = d.getFullYear(); var day = d.getDate(); switch (e.keyCode) { case 84: // [T]oday d = new Date(); break; case 109: case 189: d = new Date(year, month, day - 1); break; case 107: case 187: d = new Date(year, month, day + 1); break; case 77: //M d = new Date(year, month - 1, day); break; case 72: //H d = new Date(year, month + 1, day); break; case 191: // ? if (e.shiftKey) $el.tooltip("<b>T</b> Today<br/><b>+</b> add day<br /><b>-</b> subtract day<br /><b>M</b> subtract Month<br /><b>H</b> add montH<br/><b>Y</b> subtract Year<br/><b>R</b> add yeaR", 5000, { isHtml: true }); return false; default: return true; } $el.val(d.formatDate(opt.dateFormat)); if (opt.callback) // call async setTimeout(function () { opt.callback.call($el.get(0),d); }, 10); return false; }); } The logic for this plugin is similar to the timeKeys plugin, but it's a little simpler as it tries to directly parse the date value from a string via new Date(inputString). As mentioned it also uses a helper function from ww.jquery.js to format dates which removes the logic to perform date formatting manually which again reduces the size of the code. And the Key is… I've been using both of these plugins in combination with the jQuery UI datepicker for datetime values and I've found that I rarely actually pop up the date picker any more. It's just so much more efficient to use the hotkeys to navigate dates. It's still nice to have the picker around though - it provides the expected behavior for date entry. For time values however I can't justify the UI overhead of a picker that doesn't make it any easier to pick a time. Most people know how to type in a time value and if they want shortcuts keystrokes easily beat out any pop up UI. Hopefully you'll find this as useful as I have found it for my code. 
Resources: Online Sample | Download Sample Project. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in jQuery, HTML.

    Read the article

  • Dynamic Paging and Sorting

    - by Ricardo Peres
    Since .NET 3.5 brought us LINQ and expressions, I became a great fan of these technologies. There are times, however, when strong typing cannot be used - for example, when you are developing an ObjectDataSource and you need to do paging having just a column name, a page index and a page size, so I set out to fix this. Yes, I know about Dynamic LINQ, and even talked on it previously, but there's no need to add this extra assembly. So, without further delay, here's the code, in both generic and non-generic versions: public static IList ApplyPagingAndSorting(IEnumerable enumerable, Type elementType, Int32 pageSize, Int32 pageIndex, params String [] orderByColumns) { MethodInfo asQueryableMethod = typeof(Queryable).GetMethods(BindingFlags.Static | BindingFlags.Public).Where(m = (m.Name == "AsQueryable") && (m.ContainsGenericParameters == false)).Single(); IQueryable query = (enumerable is IQueryable) ? (enumerable as IQueryable) : asQueryableMethod.Invoke(null, new Object [] { enumerable }) as IQueryable; if ((orderByColumns != null) && (orderByColumns.Length 0)) { PropertyInfo orderByProperty = elementType.GetProperty(orderByColumns [ 0 ]); MemberExpression member = Expression.MakeMemberAccess(Expression.Parameter(elementType, "n"), orderByProperty); LambdaExpression orderBy = Expression.Lambda(member, member.Expression as ParameterExpression); MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m = m.Name == "OrderBy").ToArray() [ 0 ].MakeGenericMethod(elementType, orderByProperty.PropertyType); query = orderByMethod.Invoke(null, new Object [] { query, orderBy }) as IQueryable; if (orderByColumns.Length 1) { MethodInfo thenByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m = m.Name == "ThenBy").ToArray() [ 0 ].MakeGenericMethod(elementType, orderByProperty.PropertyType); PropertyInfo thenByProperty = null; MemberExpression thenByMember = null; LambdaExpression thenBy = null; for (Int32 i = 1; i 0) { MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(elementType); MethodInfo skipMethod = typeof(Queryable).GetMethod("Skip", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(elementType); query = skipMethod.Invoke(null, new Object [] { query, pageSize * pageIndex }) as IQueryable; query = takeMethod.Invoke(null, new Object [] { query, pageSize }) as IQueryable; } MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(elementType); IList list = toListMethod.Invoke(null, new Object [] { query }) as IList; return (list); } public static List ApplyPagingAndSorting(IEnumerable enumerable, Int32 pageSize, Int32 pageIndex, params String [] orderByColumns) { return (ApplyPagingAndSorting(enumerable, typeof(T), pageSize, pageIndex, orderByColumns) as List); } List list = new List { new DateTime(2010, 1, 1), new DateTime(1999, 1, 12), new DateTime(1900, 10, 10), new DateTime(1900, 2, 20), new DateTime(2012, 5, 5), new DateTime(2012, 1, 20) }; List sortedList = ApplyPagingAndSorting(list, 3, 0, "Year", "Month", "Day"); SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf'; SyntaxHighlighter.brushes.CSharp.aliases = ['c#', 'c-sharp', 'csharp']; SyntaxHighlighter.all();
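    Note that the listing above lost some characters when the post was formatted (the generic type markers and the > in lambda arrows and comparisons are missing, and part of the ThenBy loop appears truncated), so it will not compile as shown. As a rough, independently written sketch of the same idea - building an OrderBy from a column name with expression trees and then applying Skip/Take - something like the following works; it is not the author's original code and only handles a single sort column.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public static class DynamicPaging
        {
            public static List<T> ApplyPagingAndSorting<T>(
                IEnumerable<T> source, int pageSize, int pageIndex, string orderByColumn)
            {
                IQueryable<T> query = source.AsQueryable();

                // Build n => n.OrderByColumn dynamically from the column name.
                var parameter = Expression.Parameter(typeof(T), "n");
                var property = Expression.Property(parameter, orderByColumn);
                var lambda = Expression.Lambda(property, parameter);

                // Call Queryable.OrderBy<T, TKey>(query, lambda) without knowing TKey at compile time.
                var orderByCall = Expression.Call(
                    typeof(Queryable), "OrderBy",
                    new[] { typeof(T), property.Type },
                    query.Expression, Expression.Quote(lambda));

                query = query.Provider.CreateQuery<T>(orderByCall);

                // Apply paging on the ordered sequence.
                return query.Skip(pageSize * pageIndex).Take(pageSize).ToList();
            }
        }

    Mirroring the example in the post, DynamicPaging.ApplyPagingAndSorting(list, 3, 0, "Year") returns the first page of three dates ordered by year.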

    Read the article

  • ESB Toolkit 2.0 EndPointConfig (HTTPS with WCF-BasicHttp and the ESB Toolkit 2.0)

    - by Andy Morrison
    Earlier this week I had an ESB endpoint (Off-Ramp in ESB parlance) that I was sending to over http using WCF-BasicHttp.  I needed to switch the protocol to https: which I did by changing my UDDI Binding over to https:  No problem from a management perspective; however, when I tried to run the process I saw this exception: Event Type:                     Error Event Source:                BizTalk Server 2009 Event Category:            BizTalk Server 2009 Event ID:   5754 Date:                                    3/10/2010 Time:                                   2:58:23 PM User:                                    N/A Computer:                       XXXXXXXXX Description: A message sent to adapter "WCF-BasicHttp" on send port "SPDynamic.XXX.SR" with URI "https://XXXXXXXXX.com/XXXXXXX/whatever.asmx" is suspended.  Error details: System.ArgumentException: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via    at System.ServiceModel.Channels.TransportChannelFactory`1.ValidateScheme(Uri via)    at System.ServiceModel.Channels.HttpChannelFactory.ValidateCreateChannelParameters(EndpointAddress remoteAddress, Uri via)    at System.ServiceModel.Channels.HttpChannelFactory.OnCreateChannel(EndpointAddress remoteAddress, Uri via)    at System.ServiceModel.Channels.ChannelFactoryBase`1.InternalCreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ChannelFactoryBase`1.CreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.ServiceChannelFactoryOverRequest.CreateInnerChannelBinder(EndpointAddress to, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.CreateServiceChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel(Type channelType, EndpointAddress address, Uri via)    at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.ChannelFactory`1.CreateChannel()    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.GetChannel[TChannel](IBaseMessage bizTalkMessage, ChannelFactory`1& cachedFactory)    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.SendMessage(IBaseMessage bizTalkMessage)  MessageId:  {1170F4ED-550F-4F7E-B0E0-1EE92A25AB10}  InstanceID: {1640C6C6-CA9C-4746-AEB0-584FDF7BB61E} I knew from a previous experience that I likely needed to set the SecurityMode setting for my Send Port.  But how do you do this for a Dynamic port (which I was using since this is an ESB solution)? Within the UDDI portal you have to add an additional Instance Info to your Binding named: EndPointConfig  Then you have to set its value to:  SecurityMode=Transport Like this:    The EndPointConfig is how the ESB Toolkit 2.0 provides extensibility for the various transports.  To see what the key-value pair options are for a given transport, open up an itinerary and change one of your resolvers to a “static” resolver by setting the “Resolver Implementation” to Static.  Then select a “Transport Name” ”, for instance to WCF-BasicHttp.  At this point you can then click on the “EndPoint Configuration” property for to see an adapter/ramp specific properties dialog (key-value pairs.)    Here’s the dialog that popped up for WCF-BasicHttp:   I simply set the SecurityMode to Transport.  Please note that you will get different properties within the window depending on the Transport Name you select for the resolver. 
When you are done with your settings, export the itinerary to disk and open the XML; then find that resolver's XML within the file.  It will look like endpointConfig=SecurityMode=Transport in this case.  Note that if you set additional properties, you will have additional key-value pairs after endpointConfig=.  Copy that string and paste it into the UDDI portal as your Binding's EndPointConfig Instance Info value.

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities a query like this is easy to construct:[TestMethod] public void QueryLocationsTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 kilometers ordered by distance var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) < 5000) .OrderBy( loc=> loc.Location.Distance(sourcePoint) ) .Select( loc=> new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance); } } This example produces: 301 15th Street, Hood River (0 meters)The Hatchery, Bingen (809 meters)Kaze Sushi, Hood River (1,074 meters)   The first point in the database is the same as my source point I'm comparing against so the distance is 0. The other two are within the 5 mile radius, while the Arlington location which is 65 miles or so out is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have to points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (can be found in GeoUtils.cs of the sample project):/// <summary> /// Convert meters to miles /// </summary> /// <param name="meters"></param> /// <returns></returns> public static double MetersToMiles(double? meters) { if (meters == null) return 0F; return meters.Value * 0.000621371192; } /// <summary> /// Convert miles to meters /// </summary> /// <param name="miles"></param> /// <returns></returns> public static double MilesToMeters(double? miles) { if (miles == null) return 0; return miles.Value * 1609.344; } Using these two helpers you can query on miles like this:[TestMethod] public void QueryLocationsMilesTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 miles ordered by distance var fiveMiles = GeoUtils.MilesToMeters(5); var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles) .OrderBy(loc => loc.Location.Distance(sourcePoint)) .Select(loc => new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance)); } } which produces: 301 15th Street, Hood River (0.0 miles)The Hatchery, Bingen (0.5 miles)Kaze Sushi, Hood River (0.7 miles) Nice 'n simple. 
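As an aside (not part of the original post): if you ever want to sanity-check the meter values that STDistance/Distance return without a round trip to SQL Server, a plain great-circle (haversine) calculation gets very close for short distances. It uses a spherical earth model, so expect small differences from the ellipsoidal SRID 4326 results.

    using System;

    public static class GeoSanityCheck
    {
        // Great-circle distance in meters between two lat/long points (spherical model).
        public static double HaversineMeters(double lat1, double lon1, double lat2, double lon2)
        {
            const double earthRadiusMeters = 6371000.0;
            double dLat = ToRadians(lat2 - lat1);
            double dLon = ToRadians(lon2 - lon1);
            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                       Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                       Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            return 2 * earthRadiusMeters * Math.Asin(Math.Sqrt(a));
        }

        private static double ToRadians(double degrees)
        {
            return degrees * Math.PI / 180.0;
        }
    }

For the two Hood River points above, HaversineMeters(45.712113, -121.527200, 45.714240, -121.517265) comes out around 808 meters, in line with the STDistance result.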
.NET 4.5 Only Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4 which ships in the same NuGet package or installer) and requires .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity which is a CLR assembly and is only updated by major versions of .NET. Why this decision was made to add these types to System.Data.Entity rather than to the frequently updated EntityFramework assembly that would have possibly made this work in .NET 4.0 is beyond me, especially given that there are no native .NET framework spatial types to begin with. I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live on those assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services which in turn don't work with Entity framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as a I may be, for EF specific code the Entity framework specific types are easy to use and work well. Working with pre-.NET 4.5 Entity Framework and Spatial Data If you can't go to .NET 4.5 just yet you can also still use spatial features in Entity Framework, but it's a lot more work as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:[TestMethod] public void RawSqlEfAddTest() { string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT({0} {1})', 4326),@p0 )"; var sql = string.Format(sqlFormat,-121.527200, 45.712113); Console.WriteLine(sql); var context = new GeoLocationContext(); Assert.IsTrue(context.Database.ExecuteSqlCommand(sql,"301 N. 15th Street") > 0); } Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice but is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax as the longitude and latitude parameters are embedded into a string. Rest assured it's required as the following does not work:string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326),@p2 )";context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street") Explicitly assigning the point value with string.format works however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance. 
Here's an example of how to retrieve some geo data into a resultset using EF's and SqlQuery method:[TestMethod] public void RawSqlEfQueryTest() { var sqlFormat = @" DECLARE @s geography SET @s = geography:: STGeomFromText('POINT({0} {1})' , 4326); SELECT Address, Location.ToString() as GeoString, @s.STDistance( Location) as Distance FROM GeoLocations ORDER BY Distance"; var sql = string.Format(sqlFormat, -121.527200, 45.712113); var context = new GeoLocationContext(); var locations = context.Database.SqlQuery<ResultData>(sql); Assert.IsTrue(locations.Count() > 0); foreach (var location in locations) { Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance); } } public class ResultData { public string GeoString { get; set; } public double Distance { get; set; } public string Address { get; set; } } Hopefully you don't have to resort to this approach as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before I typically ended up retrieving data pks only and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious but it did work. Summary For the current project I'm working on we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app heavily relies on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial specific queries. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-) Resources Download Sample Project© Rick Strahl, West Wind Technologies, 2005-2012Posted in ADO.NET  Sql Server  .NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Azure &ndash; Part 6 &ndash; Blob Storage Service

    - by Shaun
    When migrate your application onto the Azure one of the biggest concern would be the external files. In the original way we understood and ensure which machine and folder our application (website or web service) is located in. So that we can use the MapPath or some other methods to read and write the external files for example the images, text files or the xml files, etc. But things have been changed when we deploy them on Azure. Azure is not a server, or a single machine, it’s a set of virtual server machine running under the Azure OS. And even worse, your application might be moved between thses machines. So it’s impossible to read or write the external files on Azure. In order to resolve this issue the Windows Azure provides another storage serviec – Blob, for us. Different to the table service, the blob serivce is to be used to store text and binary data rather than the structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID and each block can be a maximum of 4 MB in size. Page Blobs are are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB.   In the managed library the Azure SDK allows us to communicate with the blobs through these classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and the CloudPageBlob. Similar with the table service managed library, the CloudBlobClient allows us to reach the blob service by passing our storage account information and also responsible for creating the blob container is not exist. Then from the CloudBlobContainer we can save or load the block blobs and page blobs into the CloudBlockBlob and the CloudPageBlob classes.   Let’s improve our exmaple in the previous posts – add a service method allows the user to upload the logo image. In the server side I created a method name UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also add the validation to ensure that the email passed in is valid. 1: var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString"); 2: var accountContext = new DynamicDataContext<Account>(storageAccount); 3:  4: // validation 5: var accountNumber = accountContext.Load() 6: .Where(a => a.Email == email) 7: .ToList() 8: .Count; 9: if (accountNumber <= 0) 10: { 11: throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email)); 12: } Then there are three steps for saving the image into the blob service. First alike the table service I created the container with a unique name and create it if it’s not exist. 1: // create the blob container for account logos if not exist 2: CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient(); 3: CloudBlobContainer container = blobStorage.GetContainerReference("account-logo"); 4: container.CreateIfNotExist(); Then, since in this example I will just send the blob access URL back to the client so I need to open the read permission on that container. 
1: // configure blob container for public access 2: BlobContainerPermissions permissions = container.GetPermissions(); 3: permissions.PublicAccess = BlobContainerPublicAccessType.Container; 4: container.SetPermissions(permissions); And at the end I combine the blob resource name from the input file name and Guid, and then save it to the block blob by using the UploadByteArray method. Finally I returned the URL of this blob back to the client side. 1: // save the blob into the blob service 2: string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString()); 3: CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName); 4: blob.UploadByteArray(image); 5:  6: return blob.Uri.ToString(); Let’s update a bit on the client side application and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it’s OK it will show the URL of the blob on the server side so that we can see it through the web browser. Then we can see the logo I’ve just uploaded through the URL here. You may notice that the blob URL was based on the container name and the blob unique name. In the document of the Azure SDK there’s a page for the rule of naming them, but I think the simple rule would be – they must be valid as an URL address. So that you cannot name the container with dot or slash as it will break the ADO.Data Service routing rule. For exmaple if you named the blob container as Account.Logo then it will throw an exception says 400 Bad Request.   Summary In this short entity I covered the simple usage of the blob service to save the images onto Azure. Since the Azure platform does not support the file system we have to migrate our code for reading/writing files to the blob service before deploy it to Azure. In order to reducing this effort Microsoft provided a new approch named Drive, which allows us read and write the NTFS files just likes what we did before. It’s built up on the blob serivce but more properly for files accessing. I will discuss more about it in the next post.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
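Since the container is opened for public read access and the service hands the blob URL back to the caller, the client side can fetch the uploaded logo with nothing more than an HTTP GET. A minimal sketch (an addition to the post, not the author's code; the local file name is arbitrary):

    using System.IO;
    using System.Net;

    public static class LogoClient
    {
        // blobUrl is the string returned by the UploadLogo service method above.
        public static void DownloadLogo(string blobUrl)
        {
            using (var client = new WebClient())
            {
                byte[] logoBytes = client.DownloadData(blobUrl);
                File.WriteAllBytes("logo.jpg", logoBytes);
            }
        }
    }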

    Read the article

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
    How do our users see the products we write for them, and how happy are they with our work? Are they able to get their work done without fighting with cool features and crashes, or do they just switch off the resisting part of their brain to survive our software? The overall picture of the software usability landscape is not very nice. Okay, it is not even nice. But, fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything. Why Software Sucks… is a book for software users, but I consider it a must-read also for developers and especially for their managers, whose politics often kill usability topics as soon as they appear. For managers, usability is a soft topic that can be manipulated whichever way best suits the current state of the project. Although developers are not UI designers or usability experts, they are still very often forced to deal with these topics, and this is how usability problems start (of course, designers are also able to produce designs that are stupid and too hard for users to use, but this blog is about development). I found this book to be a very interesting and funny read. It is not a humor book, but it explains everything in a way you will remember well later. It took me about three evenings to go through it, and I am still enjoying what I found and how the author explains our weird young field to end users. I suggest this book to all developers - while you are demanding that your management hire or outsource a usability expert, you are at least causing less pain to end users. So, go and buy this book, just like I did. And… my thanks to Mr. Platt :) There is one more book I suggest you read if you are interested in usability - Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug. Editorial review from Amazon: Today’s software sucks. There’s no other good way to say it. It’s unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It’s unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it’s hard to use, requiring large amounts of head-banging to figure out the simplest operations. It’s no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that’s the case and, more importantly, why it doesn’t have to be that way. And he explains it in plain, jargon-free English that’s a joy to read, using real-world examples with which you’re already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don’t have to take the abuse that bad software dishes out. As you might expect from the book’s title, Dave’s expose is laced with humor—sometimes outrageous, but always dead on. You’ll laugh out loud as you recall incidents with your own software that made you cry. You’ll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer’s face. But Dave hasn’t written this book just for laughs. He’s written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn’t. Table of contents Acknowledgments Introduction Chapter 1: Who’re You Calling a Dummy? 
Where We Came From Why It Still Sucks Today Control versus Ease of Use I Don’t Care How Your Program Works A Bad Feature and a Good One Stopping the Proceedings with Idiocy Testing on Live Animals Where We Are and What You Can Do Chapter 2: Tangled in the Web Where We Came From How It Works Why It Still Sucks Today Client-Centered Design versus Server-Centered Design Where’s My Eye Opener? It’s Obvious—Not! Splash, Flash, and Animation Testing on Live Animals What You Can Do about It Chapter 3: Keep Me Safe The Way It Was Why It Sucks Today What Programmers Need to Know, but Don’t A Human Operation Budgeting for Hassles Users Are Lazy Social Engineering Last Word on Security What You Can Do Chapter 4: Who the Heck Are You? Where We Came From Why It Still Sucks Today Incompatible Requirements OK, So Now What? Chapter 5: Who’re You Looking At? Yes, They Know You Why It Sucks More Than Ever Today Users Don’t Know Where the Risks Are What They Know First Milk You with Cookies? Privacy Policy Nonsense Covering Your Tracks The Google Conundrum Solution Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola See Them in Their Native Habitat All These Geeks Who Speaks, and When, and about What Selling It The Next Generation of Geeks—Passing It On Chapter 7: Who Are These Crazy Bastards Anyway? Homo Logicus Testosterone Poisoning Control and Contentment Making Models Geeks and Jocks Jargon Brains and Constraints Seven Habits of Geeks Chapter 8: Microsoft: Can’t Live With ’Em and Can’t Live Without ’Em They Run the World Me and Them Where We Came From Why It Sucks Today Damned if You Do, Damned if You Don’t We Love to Hate Them Plus ça Change Growing-Up Pains What You Can Do about It The Last Word Chapter 9: Doing Something About It 1. Buy 2. Tell 3. Ridicule 4. Trust 5. Organize Epilogue About the Author

    Read the article

  • On Contract Employment

    - by kerry
    I am going to post about something I don’t post about a lot, the business side of development.  Scott at the antipimp does a good job of explaining how contracts work from a business perspective.  I am going to give a view from the ground. First, a little background on myself.  I have recently taken a 6 month contract after about 8 years of fulltime employment.  I have 2 kids, and a stay at home wife.  I took this contract opportunity because I wanted to try it on for size.  I have always wondered whether I would like doing contracts over fulltime employment.  So, in keeping with the theme of this blog I will write this down now so that I may reference it later. ALL jobs are temporary! Right now you may not realize it, most people simply ignore it, but EVERY job is temporary.  Everyone should be planning for life after the money stops coming in.  Sadly, most people do not.  Contracting pushes this issue to the forefront, making you deal with it.  After a month on a contract, I am happy to say that I am saving more than I ever saved in a fulltime position.  Hopefully, I will be ready in case of an extended window of unemployment between contracts. Networking I find it extremely gratifying getting to know people.  It is especially beneficial when moving to a new city.  What better way to go out and meet people in your field than to work a few contracts?  6 months of working beside someone and you get to know them pretty well.  This is one of my favorite aspects. Technical Agility Moving between IS shops takes (or molds you into) a flexible person.  You have to be able to go in and hit the ground running.  This means you need to be able to sit down and start work on a large codebase working in a language that you may or may not have that much experience in.  It is also an excellent way to learn new languages and broaden your technical skill set.  I took my current position to learn Ruby.  A month ago, I had only used it in passing, but now I am using it every day.  It’s a tragedy in this field when people start coding for the joy and love of coding, then become deeply entrenched in their companies methods and technologies that it becomes a just a job. Less Stress I am not talking about the kind of stress you get from a jackass boss.  I am talking about the kind of stress I (or others) experience about planning and future proofing your code.  Not saying I stay up at night worrying whether we have done it right, if that code I wrote today is going to bite me later, but it still creeps around in the dark recesses of my mind.  Careful though, I am not suggesting you write sloppy code; just defer any large architectural or design decisions to the ‘code owners’. Flexible Scheduling It makes me very happy to be able to cut out a few hours early on a Friday (provided the work is done) and start the weekend off early by going to the pool, or taking the kids to the park.  Contracting provides you this opportunity (mileage may vary).  Most of your fulltime brethren will not care, they will be jealous that they’re corporate policy prevents them from doing the same.  However, you must be mindful of situations where this is not appropriate, and don’t over do it.  You are there to work after all. Affirmation of Need Have you ever been stuck in a job where you thought you were underpaid?  Have you ever been in a position where you felt like there was not enough workload for you?  This is not a problem for contractors.  
When you start a contract it is understood that you are needed, and the employer knows that you are happy with the terms. Contracting may not be for everyone.  But, if you develop a relationship with a good consulting firm, keep their clients happy, then they will keep you happy.  They want you to work almost as much as you do.  Just be sure and plan financially for any windows of unemployment.

    Read the article

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I’ve used Survey Monkey’s API for a while, and though it’s pretty powerful, there’s a lot of boilerplate each time it’s used in a new project, and the json it returns needs a bunch of processing to be able to use the raw information. So I’ve finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are: Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going Automatically handles things like rate-limiting and paging through results Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You’ll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like: string apiKey = "KEY"; string token = "TOKEN"; var sm = new SurveyMonkeyApi(apiKey, token); List<Survey> surveys = sm.GetSurveyList(); The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys. All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing “pineapple” in the title: var settings = new GetSurveyListSettings { Title = "pineapple", StartDate = DateTime.Now.AddYears(-1) }; List<Survey> surveys = sm.GetSurveyList(settings); By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don’t want the information, using var settings = new GetSurveyListSettings { OptionalData = new GetSurveyListSettingsOptionalData { DateCreated = false, AnalysisUrl = false } }; Survey Monkey’s 7 read-only endpoints are supported, and the other 4 which make modifications to data might be supported in the future. The endpoints are: Endpoint Method Object returned /surveys/get_survey_list GetSurveyList() List<Survey> /surveys/get_survey_details GetSurveyDetails() Survey /surveys/get_collector_list GetCollectorList() List<Collector> /surveys/get_respondent_list GetRespondentList() List<Respondent> /surveys/get_responses GetResponses() List<Response> /surveys/get_response_counts GetResponseCounts() Collector /user/get_user_details GetUserDetails() UserDetails /batch/create_flow Not supported Not supported /batch/send_flow Not supported Not supported /templates/get_template_list Not supported Not supported /collectors/create_collector Not supported Not supported The hierarchy of objects the library can return is Survey List<Page> List<Question> QuestionType List<Answer> List<Item> List<Collector> List<Response> Respondent List<ResponseQuestion> List<ResponseAnswer> Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing which is more natural for .NET, rather than the snake_casing used by SurveyMonkey). 
For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor: var sm = new SurveyMonkeyApi(apiKey, token, 200); //200ms delay = 5 reqs per sec There’s a separate cap of 1000 requests per day for each API key, which the library doesn’t currently enforce, so if you think you’ll be in danger of exceeding that you’ll need to handle it yourself for now.  To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property. If the library encounters any errors, including communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls. Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using sm.FillMissingSurveyInformation(surveys); For each survey in the list, it uses the other endpoints to fill in the missing information about the survey’s question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey’s question structure, so that for each question in a respondent’s set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as { "answers": [ { "row": "9384627365", } ], "question_id": "615487516" } Separately, the question’s structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like { "text": "Fourth item in dropdown list", "visible": true, "position": 4, "type": "row", "answer_id": "9384627365" } The library understands how this mapping works, and uses that to give you the following ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent’s answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which are supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request .
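    To pull the pieces above together, here is a minimal sketch of a typical fetch. The class, method and exception names (SurveyMonkeyApi, GetSurveyList, FillMissingSurveyInformation, RequestsMade, SurveyMonkeyException) are the ones described above; the namespace in the using directive and the key/token values are assumptions you would adjust for your own project.

    using System;
    using System.Collections.Generic;
    using SurveyMonkeyApi; // assumed namespace; install via: Install-Package SurveyMonkeyApi

    class SurveyFetchExample
    {
        static void Main()
        {
            var sm = new SurveyMonkeyApi("YOUR_API_KEY", "YOUR_OAUTH_TOKEN");
            try
            {
                // Get the survey list, then let the library make the extra calls needed
                // to attach question structure, respondents and processed answers.
                List<Survey> surveys = sm.GetSurveyList();
                sm.FillMissingSurveyInformation(surveys);

                // The daily request cap is not enforced by the library, so track it yourself.
                Console.WriteLine("API requests made: " + sm.RequestsMade);
            }
            catch (SurveyMonkeyException ex)
            {
                // Thrown for any API or communication problem, including rate-limit issues.
                Console.WriteLine("Survey Monkey call failed: " + ex.Message);
            }
        }
    }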

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    There are some common misconceptions about returning the last inserted identity value from tables. To return the last inserted identity value we can use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function depending on the requirement, but it will be a real mess if anybody uses any one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are very similar functions in terms of returning an identity value. They all return values that were inserted into an identity column. Back in SQL Server 7 we used @@IDENTITY to return the last inserted identity value because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let's check out which one is responsible for what. IDENT_CURRENT returns the last inserted identity value in a particular table. It never depends on a connection or the scope of the insert statement. The IDENT_CURRENT function takes a table name as a parameter. Here is the syntax to get the last inserted identity value in a particular table using the IDENT_CURRENT function: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last inserted identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns the value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let's have a look at the following example. Suppose I have two tables called Employee and EmployeeLog. CREATE TABLE Employee ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) CREATE TABLE EmployeeLog ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) I have an insert trigger defined on the table Employee which inserts a new record in EmployeeLog whenever a record is inserted in the Employee table. So suppose I insert a new record in the Employee table using the following statement: INSERT INTO Employee (EmpName,EmpSal) VALUES ('Arefin','1') The trigger will be fired automatically and insert a record in EmployeeLog. Here the scope of the insert statement and the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog because it's not limited to a particular scope. If I want to get the Employee table's identity value then I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It's safe and ensures bug-free code.
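    The article's examples are pure T-SQL, but the same rule applies when you insert rows from application code. Here is a hedged C# sketch (the connection string is a placeholder, and the Employee/EmployeeLog tables are the ones defined above): because SCOPE_IDENTITY() runs in the scope of the INSERT batch rather than the trigger, it returns the Employee identity even though the trigger writes to EmployeeLog.

    using System.Data.SqlClient;

    static class EmployeeRepository
    {
        public static decimal InsertEmployee(string connectionString, string name, double salary)
        {
            // The trigger on Employee fires in its own scope, so SCOPE_IDENTITY()
            // still returns the EmpId generated for the Employee row.
            const string sql =
                "INSERT INTO Employee (EmpName, EmpSal) VALUES (@EmpName, @EmpSal); " +
                "SELECT SCOPE_IDENTITY();";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@EmpName", name);
                command.Parameters.AddWithValue("@EmpSal", salary);
                connection.Open();

                // EmpId is NUMERIC(18, 0), so the scalar comes back as a decimal.
                return (decimal)command.ExecuteScalar();
            }
        }
    }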

    Read the article

  • Silverlight Cream for March 10, 2010 -- #810

    - by Dave Campbell
    In this Issue: Andrea Boschin, Jeremy Likness(-2-), Andrew Veresov, Nokola, SilverLaw, Gill Cleeren, Jim Wightman and Jeremy Likness, Viktor Larsson(-2-), and Walter Ferrari. Shoutouts: Viktor Larsson has a post up about Silverlight Market Penetration ... hope to meet you at MIX10, Viktor! Gergely Orosz has posted the Slides and code for the presentation “An Introduction to Silverlight” It appears that if I miss a day, I can pretty much do an all-submittal post :) From SilverlightCream.com: Writing an AsyncLoader to enqueue long running operations Andrea Boschin has a tutorial on SilverlightShow where he's building up an asynch service to deal with a long-running app on the server. MVVM with MEF in Silverlight: Video Tutorial Jeremy Likness has a video tutorial up for helping beginners wire up MVVM and MEF to Silverlight. Source code for the app in the video is downloadable. MVVM with MEF in Silverlight Video Tutorial Part 2: Plugins and Metadata In part 2, Jeremy Likness redesigns the app using metadata to turn the shapes into objects, and then show how easy it is to add a new plugin... and the source for the app is downloadable. Binding a Converter Parameter Andrew Veresov has a nice code-filled solution up for those times that you need to bind a ConverterParameter value. EasyPainter: Lion Hair styling Nokola has not been idle with Easy Painter... now he's added "Lion Hair" to the list of stylings you can apply... guess if you want to change someone's 'mane' ... sorry! Twisting Navigation - Silverlight 3 SilverLaw has another control up - a "Twisting Navigation" control... very cool :) ... and since I'm behind the curve, he already has an update in the Expression Gallery as noted in his post, and a video tutorial on implementing it in an application... and if you understand German, turn up the sound :) Uploading and downloading images using a WCF service with Silverlight Gill Cleeren has a tutorial up at SilverlightShow on uploading and downloading images using WCF Services in Silverlight New Windows Phone 7 Community Developer Hub Jim Wightman and Jeremy Likness have a very cool Silverlight page up where you can paste the URL of your XAP in and have it display in a "Windows 7 Series Phone" ... and that's all I'm saying about that. XAML Transformation 101 Viktor Larsson is discussing Transforms in XAML and has a nice tutorial up that is easily the beginning of a carousel... you may also want to check out his other posts... I'm adding him to my list. Silverlight 4 Webcam Demo In this post, Viktor Larsson has a tutorial up for using the WebCam. This is from a beginner perspective, so if you haven't jumped in, now's a good time. How to extend Bing Maps Silverlight with an elevation profile graph - Part 1 Walter Ferrari has a post up at SilverlightShow discussing extensions to BingMaps such as creating routes using GeoCoding and Route Services plus drawing lines on the maps and getting coordinates of the points. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context which is fairly trivial and I demonstrated a simple ViewRenderer class that simplified the process down to a couple lines of code. However, if no Controller Context is available the process is not quite as straight forward and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently.Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext, if you have a controller instance, even if MVC didn’t create that instance. Creating a Controller Instance outside of MVCThe trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context as Razor can get all the necessary context information from the HttpContextWrapper().The key to make this work is the following method:/// <summary> /// Creates an instance of an MVC controller from scratch /// when no existing ControllerContext is present /// </summary> /// <typeparam name="T">Type of the controller to create</typeparam> /// <returns>Controller Context for T</returns> /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception> public static T CreateController<T>(RouteData routeData = null) where T : Controller, new() { // create a disconnected controller instance T controller = new T(); // get context wrapper from HttpContext if available HttpContextBase wrapper = null; if (HttpContext.Current != null) wrapper = new HttpContextWrapper(System.Web.HttpContext.Current); else throw new InvalidOperationException( "Can't create Controller Context if no active HttpContext instance is available."); if (routeData == null) routeData = new RouteData(); // add the controller routing if not existing if (!routeData.Values.ContainsKey("controller") && !routeData.Values.ContainsKey("Controller")) routeData.Values.Add("controller", controller.GetType().Name .ToLower() .Replace("controller", "")); controller.ControllerContext = new ControllerContext(wrapper, routeData, controller); return controller; }This method creates an instance of a Controller class from an existing HttpContext which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler as I needed to or even from within a WebAPI controller as long as it’s running inside of ASP.NET (ie. not self-hosted). Nice.So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. 
Here’s what I ended up with in my Application’s custom error HttpModule: protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model) { var Response = HttpContext.Current.Response; Response.ContentType = "text/html"; Response.StatusCode = errorHandler.OriginalHttpStatusCode; var context = ViewRenderer.CreateController<ErrorController>().ControllerContext; var renderer = new ViewRenderer(context); string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model); Response.Write(html); } That’s pretty sweet, because it’s now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic view as a base for the controller context when not passed: public ViewRenderer(ControllerContext controllerContext = null) { // Create a known controller from HttpContext if no context is passed if (controllerContext == null) { if (HttpContext.Current != null) controllerContext = CreateController<ErrorController>().ControllerContext; else throw new InvalidOperationException( "ViewRenderer must run in the context of an ASP.NET " + "Application and requires HttpContext.Current to be present."); } Context = controllerContext; } In this case I use the ErrorController class which is a generic controller instance that exists in the same assembly as my ViewRenderer class and that works just fine since ‘generically’ rendered views tend to not rely on anything from the controller other than the model which is explicitly passed. While these days most of my apps use MVC I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which when they error often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application! You can check out the updated ViewRenderer class below to render your ‘generic views’ from anywhere within your ASP.NET applications. Hope some of you find this useful. Resources: ViewRenderer Class in Westwind.Web.Mvc Library (Github), Original ViewRenderer Article, Razor Hosting Library (GitHub), Original Razor Hosting Article. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET MVC.
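    As a quick footnote to the emailing scenario mentioned above, here's a minimal sketch of a hypothetical helper that renders an email body from a Razor view. The view path and EmailModel type are assumptions; the ViewRenderer calls are the same ones shown in the error-module example.

    // Hypothetical helper - renders an email body from a Razor view anywhere in an ASP.NET app.
    public static class EmailTemplates
    {
        public static string RenderOrderConfirmation(EmailModel model)
        {
            // No ControllerContext is passed, so ViewRenderer builds one internally
            // from HttpContext.Current using the generic ErrorController.
            var renderer = new ViewRenderer();
            return renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", model);
        }
    }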

    Read the article

  • Agile Testing Days 2012 – Day 1 – The birth of the #unicorn…

    - by Chris George
    Still riding the high from the tutorial day, I arrived at the conference venue eager to get cracking with the day’s talks. The opening Keynote was “Disciplined Agile Delivery: The Foundation for Scaling Agile” presented by Scott Ambler. The general ideas behind the methodology such as not re-inventing the wheel, and being goal driven, not prescriptive in how you work certainly struck chords with how we are trying to work in my team. Scott made some interesting observations about how scrum is quite prescriptive and is this really agile? I agreed with quite a few of his points on how what works for one team may not work for another. How a team works should be driven by context and reflection, not process and prescription. However, I was somewhat dubious about some of the statistics he rolled out towards the end. However, out of this keynote was born something that was to transcend this one presentation. During the talk, Scott mentioned on more than one occasion “In the real world”, and at one point made reference to people living in the land of unicorns and rainbows. The challenge was then laid down on twitter for all speakers to include a unicorn in their presentations… and for the most part this happened! It became an identity for this year’s conference, and I’m sure something that any attendee will always associate with Agile Testing Days 2012! Following this keynote, I attended “Going agile with Automated GUI Testing – Some personal insights” by Jan Zdunek from codecentric on the vendor track. My speciality is test automation, and in particular GUI testing, so this drew me to this talk more than the others. Thankfully, it was made clear from the very start that this was not peddling any particular product (even though it was on the vendor track), and Jan faithfully stuck to that. Most of the content was not new to me, but it was really comforting to hear someone else with very similar experiences to my own. In particular, things like how GUI testing is hard and is not a silver bullet; how record & replay is NOT a good thing to do (which drew a somewhat inflammatory tweet from an automation company when I tweeted that!). Something that I have started hearing around the place, and has certainly been murmured about at work, is to push more of the automation coding onto the developers. After all they are the coding experts. I agree with this to a degree, but I personally enjoy coding and find it very rewarding doing so, therefore I’d be reluctant to give it up. I think there are some better alternatives such as pairing with a developer. Lastly, Jan mentioned, almost in passing, that we should consider virtualisation for gui testing for covering configuration combinations. On my project we’ve been running our win32/.NET GUI tests in cloud virtualisation for a couple of years now… I really should write about that! After lunch the second keynote of the day was by Lisa Crispin and Janet Gregory, “Myths about Agile Testing, De-Bunked”. It started off well… with the two ladies donning Medusa style head bands whilst dispelling several myths about agile testing! I got the impression that it was perhaps not as slick as they would have liked, but then Janet was suffering with a very sore throat so kept losing her voice. Nevertheless, the presentation was captivating, and they debunked several myths such as: “Testing is dead”, “Testers must write code”, “Agile teams always deliver faster”.
I didn’t take many notes for this because it was being recorded, but unfortunately the recordings have not been posted yet so I’ll write more about this when they are. The TestLab was held during a somewhat free-for-all time during most of the afternoon. It looked intriguing and proved to be one of the surprising experiences of the conference for me. Run by James Lyndsay and Bart Knaack, it consisted of a number of ‘stations’ that offered different testing problems. I opted for testing a mathematical drawing app called Geogebra, the task being to pair up and exploratory test it. After an allotted time, we discussed issues we’d found and decided if we wanted to continue ‘playing’, to which we all agreed! It was fun! The last track talk of the day was “Developers Exploratory Testing – Raising the bar” by Sigge Birgisson. One of the teams at Red Gate have tried Dev or Team exploratory testing a couple of times, and I was really interested to go to the presentation that prompted that. I was not disappointed! Sigge gave a first-class presentation, and not only explained what DET was all about, but also how to go about implementing it. Little tips like calling it a ‘workshop’ rather than ‘testing’ I can really see working! Monday evening saw the presentation of the award for the Most Influential Agile Testing Professional Person going to a much-deserved Lisa Crispin. The evening was great, with acrobatics, magic and music. My Takeaway Triple from Day 1:  Some of the cool stuff that was suggested in the GUI Testing talk, we are already doing. I should write about that! Testing is not dead! Perhaps testing will become more of a skill than a specific role, but it is certainly not dead. Team/Developer exploratory testing… seems like a no-brainer assuming you have a team who is willing.  Day 2 – Coming soon…

    Read the article

  • Ubuntu 13.10 unity won't load from this morning

    - by user287957
    I turned on my pc this morning and unity will not load at all. I have tried loading it manually using ctrl+alt+f1 and all i got from it was the following:- compiz (core) - Info: Loading plugin: core compiz (core) - Info: Starting plugin: core compiz (core) - Info: Loading plugin: ccp compiz (core) - Info: Starting plugin: ccp compizconfig - Info: Backend : gsettings compizconfig - Info: Integration : true compizconfig - Info: Profile : unity compiz (core) - Info: Loading plugin: composite compiz (core) - Info: Starting plugin: composite compiz (core) - Info: Loading plugin: opengl compiz (core) - Info: Starting plugin: opengl libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/r600_dri.so failed (/usr/lib/x86_64- linux-gnu/dri/r600_dri.so: undefined symbol: _glapi_tls_Dispatch) libGL error: dlopen ${ORIGIN}/dri/r600_dri.so failed (${ORIGIN}/dri/r600_dri.so: cannot open shared object file: No such file or directory) libGL error: dlopen /usr/lib/dri/r600_dri.so failed (/usr/lib/dri/r600_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: r600_dri.so libGL error: driver pointer missing libGL error: failed to load driver: r600 libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch) libGL error: dlopen ${ORIGIN}/dri/swrast_dri.so failed (${ORIGIN}/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: swrast_dri.so libGL error: failed to load driver: swrast compiz (core) - Info: Loading plugin: compiztoolbox compiz (core) - Info: Starting plugin: compiztoolbox compiz (core) - Info: Loading plugin: decor compiz (core) - Info: Starting plugin: decor compiz (core) - Info: Loading plugin: copytex compiz (core) - Info: Starting plugin: copytex compiz (core) - Info: Loading plugin: snap compiz (core) - Info: Starting plugin: snap compiz (core) - Info: Loading plugin: resize compiz (core) - Info: Starting plugin: resize compiz (core) - Info: Loading plugin: gnomecompat compiz (core) - Info: Starting plugin: gnomecompat compiz (core) - Info: Loading plugin: move compiz (core) - Info: Starting plugin: move compiz (core) - Info: Loading plugin: place compiz (core) - Info: Starting plugin: place compiz (core) - Info: Loading plugin: mousepoll compiz (core) - Info: Starting plugin: mousepoll compiz (core) - Info: Loading plugin: regex compiz (core) - Info: Starting plugin: regex compiz (core) - Info: Loading plugin: imgpng compiz (core) - Info: Starting plugin: imgpng compiz (core) - Info: Loading plugin: vpswitch compiz (core) - Info: Starting plugin: vpswitch compiz (core) - Info: Loading plugin: grid compiz (core) - Info: Starting plugin: grid compiz (core) - Info: Loading plugin: animation compiz (core) - Info: Starting plugin: animation compiz (core) - Info: Loading plugin: expo compiz (core) - Info: Starting plugin: expo compiz (core) - Info: Loading plugin: session compiz (core) - Info: Starting plugin: session compiz (core) - Info: Loading plugin: wall compiz (core) - Info: Starting plugin: wall compiz (core) - Info: Loading plugin: fade compiz (core) - Info: Starting plugin: fade compiz (core) - Info: Loading plugin: unitymtgrabhandles compiz (core) - Info: Starting plugin: unitymtgrabhandles compiz (core) - Info: Loading plugin: ezoom compiz (core) 
- Info: Starting plugin: ezoom compiz (core) - Info: Loading plugin: workarounds compiz (core) - Info: Starting plugin: workarounds compiz (core) - Info: Loading plugin: scale compiz (core) - Info: Starting plugin: scale compiz (core) - Info: Loading plugin: unityshell compiz (core) - Info: Starting plugin: unityshell WARN 2014-06-03 10:55:31 unity.glib.dbus.server GLibDBusServer.cpp:586 Can't register object 'com.canonical.Autopilot.Introspection' yet as we don't have a connection, waiting for it... WARN 2014-06-03 10:55:31 unity.glib.dbus.server GLibDBusServer.cpp:586 Can't register object 'com.canonical.Unity.Debug.Logging' yet as we don't have a connection, waiting for it... compiz (unityshell) - Error: GL_ARB_vertex_buffer_object not supported compiz (core) - Error: Plugin initScreen failed: unityshell compiz (core) - Error: Failed to start plugin: unityshell compiz (core) - Info: Unloading plugin: unityshell X Error of failed request: BadWindow (invalid Window parameter) Major opcode of failed request: 18 (X_ChangeProperty) Resource id in failed request: 0x4000006 Serial number of failed request: 9909 Current serial number in output stream: 9913 It was all working fine yesterday but this morning there was nothing. Please help Many Thanks

    Read the article

  • Limiting DOPs &ndash; Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I start to answer them in this way. While Dan is running on a big system he is running with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember end user are never, ever crazy!) on very high DOPs. Q: How do I control statements with very high DOPs driven from user hints in queries? A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints. Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)? A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter you should get a warning in the alert log stating it can not take effect. As a follow up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number. Parallel_max_processes will prevent this. And since parallel_max_servers should be lower than max processes you can never go over either...

    Read the article

  • AS11 Oracle B2B Sync Support - Series 1

    - by sinkarbabu.kirubanithi
    Synchronous message support has been enabled in Oracle B2B 11G. This would help customers to send the business message and receive the corresponding business response synchronously. We would like to keep this blog entry as three part series, first one would carry Oracle B2B configuration related details followed by 'how it can be consumed and utilized in an enterprise' using composites backed model. And, the last one would talk about more sophisticated seeded support built on Oracle B2B platform (Note: the last one is still in description phase and ETA hasn't been finalized yet). Details: In an effort to enable synchronous processing in Oracle B2B, we provided a platform using the existing 'callout' mechanism. In this case, we expect the 'callout' attached to the agreement to deliver incoming business message (inbound) to back-end application and get the corresponding business response from back-end and deliver it to Oracle B2B as its output. The output of 'callout' would be processed as outbound message and the same will be attached as a response for the inbound message. Requirements to enable Sync Support: Outbound side: Outbound Agreement - to send business message request Inbound Agreement - to receive business message response Inbound side: Inbound Agreement - to receive business message request Outbound Agreement - to send business message response Agreement Level Callout - to deliver the inbound request to back-end and get the corresponding business response This feature is supported only for HTTP based transport to exchange messages with Trading Partners. One may initiate the outbound message (enqueue) using any of the available Transports in Oracle B2B. Configuration: Outbound side: Please add "syncresponse=true" as "Additional Transport Header" parameter for remote Trading Partner's HTTP delivery channel configuration. This would enable Oracle B2B to process the HTTP response as inbound message and deliver the same to back-end application. All other configuration related to Agreement and Document setup remain same. Inbound side: There is no change in Agreement and Document setup. To enable "Sync Support", you need to build a 'callout' that takes the responsibility of delivering inbound message to back-end and get the corresponding business response from the back-end and attach the same as its output. Oracle B2B treats the output of 'callout' as outbound message and deliver it to Trading Partner as synchronous HTTP response. The requests that needs to processed synchronously should be received by "syncreceiver" (http://:/b2b/syncreceiver) endpoint in Oracle B2B. Exception Handling: Existing Oracle B2B exception handling applies to this use case as well. Here's the sample callout, SampleSyncCallout.java We will get you second part that talks about 'SOA composites' backed model to design the "Sync Support" use case from back-end to Trading Partners, stay tuned.

    Read the article

  • Using Rich Text Editor (WYSIWYG) in ASP.NET MVC

    - by imran_ku07
       Introduction:          In ASP.NET MVC forum I found some question regarding a sample HTML Rich Text Box Editor(also known as wysiwyg).So i decided to create a sample ASP.NET MVC web application which will use a Rich Text Box Editor. There are are lot of Html Editors are available, but for creating a sample application, i decided to use cross-browser WYSIWYG editor from openwebware. In this article I will discuss what changes needed to work this editor with ASP.NET MVC. Also I had attached the sample application for download at http://www.speedfile.org/155076. Also note that I will only show the important features, not discuss every feature in detail.   Description:          So Let's start create a sample ASP.NET MVC application. You need to add the following script files,         jquery-1.3.2.min.js        jquery_form.js        wysiwyg.js        wysiwyg-settings.js        wysiwyg-popup.js          Just put these files inside Scripts folder. Also put wysiwyg.css in your Content Folder and add the following folders in your project        addons        popups          Also create a empty folder Uploads to store the uploaded images. Next open wysiwyg.js and set your configuration                  // Images Directory        this.ImagesDir = "/addons/imagelibrary/images/";                // Popups Directory        this.PopupsDir = "/popups/";                // CSS Directory File        this.CSSFile = "/Content/wysiwyg.css";              Next create a simple View TextEditor.aspx inside View / Home Folder and add the folllowing HTML.        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage" %>            <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">        <html >            <head runat="server">                <title>TextEditor</title>                <script src="../../Scripts/wysiwyg.js" type="text/javascript"></script>                <script src="../../Scripts/wysiwyg-settings.js" type="text/javascript"></script>                <script type="text/javascript">                            WYSIWYG.attach('text', full);                            </script>            </head>            <body>                <% using (Html.BeginForm()){ %>                    <textarea id="text" name="test2" style="width:850px;height:200px;">                    </textarea>                    <input type="submit" value="submit" />                <%} %>            </body>        </html>                  Here i have just added a text area control and a submit button inside a form. 
Note the id of text area and WYSIWYG.attach function's first parameter is same and next to watch is the HomeController.cs        using System;        using System.Collections.Generic;        using System.Linq;        using System.Web;        using System.Web.Mvc;        using System.IO;        namespace HtmlTextEditor.Controllers        {            [HandleError]            public class HomeController : Controller            {                public ActionResult Index()                {                    ViewData["Message"] = "Welcome to ASP.NET MVC!";                    return View();                }                    public ActionResult About()                {                                return View();                }                        public ActionResult TextEditor()                {                    return View();                }                [AcceptVerbs(HttpVerbs.Post)]                [ValidateInput(false)]                public ActionResult TextEditor(string test2)                {                    Session["html"] = test2;                            return RedirectToAction("Index");                }                        public ActionResult UploadImage()                {                    if (Request.Files[0].FileName != "")                    {                        Request.Files[0].SaveAs(Server.MapPath("~/Uploads/" + Path.GetFileName(Request.Files[0].FileName)));                        return Content(Url.Content("~/Uploads/" + Path.GetFileName(Request.Files[0].FileName)));                    }                    return Content("a");                }            }        }          So simple code, just save the posted Html into Session. Here the parameter of TextArea action is test2 which is same as textarea control name of TextArea.aspx View. Also note ValidateInputAttribute is false, so it's up to you to defends against XSS. Also there is an Action method which simply saves the file inside Upload Folder.          I am uploading the file using Jquery Form Plugin. Here is the code which is found in insert_image.html inside addons folder,        function ChangeImage() {            var myform=document.getElementById("formUpload");                    $(myform).ajaxSubmit({success: function(responseText){                insertImage(responseText);                        window.close();                }            });        }          and here is the Index View which simply renders the html of Editor which was saved in Session        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>        <asp:Content ID="indexTitle" ContentPlaceHolderID="TitleContent" runat="server">            Home Page        </asp:Content>        <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">            <h2><%= Html.Encode(ViewData["Message"]) %></h2>            <p>                To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.            </p>            <%if (Session["html"] != null){                  Response.Write(Session["html"].ToString());            } %>                    </asp:Content>   Summary:          Hopefully you will enjoy this article. Just download the code and see the effect. From security point, you must handle the XSS attack your self. I had uploaded the sample application in http://www.speedfile.org/155076
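    Since the article leaves XSS handling up to you, here is one possible approach, sketched with the HtmlAgilityPack library (an assumption, as the article doesn't prescribe a sanitizer; a whitelist-based filter would be stronger than this blacklist). It strips script and iframe elements plus inline event-handler attributes from the posted HTML, and could be applied to test2 in the POST TextEditor action before the value goes into Session.

    using System;
    using System.Linq;
    using HtmlAgilityPack; // Install-Package HtmlAgilityPack

    public static class HtmlSanitizer
    {
        public static string StripDangerousMarkup(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);

            // Drop <script> and <iframe> elements entirely.
            var dangerousNodes = doc.DocumentNode.SelectNodes("//script|//iframe");
            if (dangerousNodes != null)
            {
                foreach (var node in dangerousNodes.ToList())
                    node.Remove();
            }

            // Drop inline event handlers such as onclick and onmouseover.
            foreach (var node in doc.DocumentNode.Descendants().ToList())
            {
                var eventAttributes = node.Attributes
                    .Where(a => a.Name.StartsWith("on", StringComparison.OrdinalIgnoreCase))
                    .ToList();
                foreach (var attribute in eventAttributes)
                    node.Attributes.Remove(attribute);
            }

            return doc.DocumentNode.OuterHtml;
        }
    }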

    Read the article

  • Debugging OWB generated SAP ABAP code executed through RFC

    - by Anil Menon
    Within OWB if you need to execute ABAP code using RFC you will have to use the SAP Function Module RFC_ABAP_INSTALL_AND_RUN. This function module is specified during the creation of the SAP source location. Usually in a Production environment a copy of this function module is used due to security restrictions. When you execute the mapping by using this Function Module you can’t see the actual ABAP code that is passed on to the SAP system. In case you want to take a look at the code that will be executed on the SAP system you need to use a custom Function Module in SAP. The easiest way to do this is to make a copy of the Function Module RFC_ABAP_INSTALL_AND_RUN and call it say Z_TEST_FM. Then edit the code of the Function Module in SAP as below FUNCTION Z_TEST_FM . DATA: BEGIN OF listobj OCCURS 20. INCLUDE STRUCTURE abaplist. DATA: END OF listobj. DATA: begin_of_line(72). DATA: line_end_char(1). DATA: line_length type I. DATA: lin(72). loop at program. append program-line to WRITES. endloop. ENDFUNCTION. Within OWB edit the SAP Location and use Z_TEST_FM as the “Execution Function Module” instead of  RFC_ABAP_INSTALL_AND_RUN. Then register this location. The Mapping you want to debug will have to be deployed. After deployment you can right click the mapping and click on “Start”.   After clicking start the “Input Parameters” screen will be displayed. You can make changes here if you need to. Check that the parameter BACKGROUND is set to “TRUE”. After Clicking “OK” the log for the execution will be displayed. The execution of Mappings will always fail when you use the above function module. Clicking on the icon “I” (information) the ABAP code will be displayed.   The ABAP code displayed is the code that is passed through the Function Module. You can also find the code by going through the log files on the server which hosts the OWB repository. The logs will be located under <OWB_HOME>/owb/log. Patch #12951045 is recommended while using the SAP Connector with OWB 11.2.0.2. For recommended patches for other releases please check with Oracle Support at http://support.oracle.com

    Read the article

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server “push” technologies. A few years back I wrote several blog posts covering different “push” technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients so I thought I’d revisit the subject and provide some updates to the original code posted. If you’ve worked with AJAX before in Web applications then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies that it’s difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed-basis which can often be wasteful and take up unnecessary resources. With server “push” technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using “push” technologies allows the client to listen for changes from the data but stay 100% focused on client activities as opposed to worrying about polling and asking the server if anything has changed. Silverlight provides several options for pushing data from a server to a client including sockets, TCP bindings and HTTP Polling Duplex.  Each has its own strengths and weaknesses as far as performance and setup work with HTTP Polling Duplex arguably being the easiest to setup and get going.  In this article I’ll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.   What is HTTP Polling Duplex? Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions.  A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80 making setup a breeze compared to the other options which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you’re looking for the best speed possible then sockets and TCP bindings are the way to go. But, they’re not the only game in town when it comes to duplex communication. The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn’t exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided: "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service." 
Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft’s Scott Guthrie provided me with a more clear definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission): "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively “put to sleep” waiting for the server to respond (it doesn’t come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send." After hearing Scott’s definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It’s kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it’s ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques even though it does use some polling of its own as a request is initially made from a client to a server. So how do you implement HTTP Polling Duplex in your Silverlight applications? Let’s take a look at the process by starting with the server. Creating an HTTP Polling Duplex WCF Service Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one way operations into an interface, create a client callback interface and you’re ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding and that’s more of a cut and paste operation once you know the configuration code to use. To create an HTTP Polling Duplex service you’ll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time. Before jumping too far into the game push service, it’s important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient which defines the callback methods that a server can use to communicate with a client (see Listing 2).   [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))] public interface IGameStreamService { [OperationContract(IsOneWay = true)] void GetTeamData(); } Listing 1. The IGameStreamService interface defines server operations that can be called on the server.   
[ServiceContract] public interface IGameStreamClient { [OperationContract(IsOneWay = true)] void ReceiveTeamData(List<Team> teamData); [OperationContract(IsOneWay = true, AsyncPattern=true)] IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state); void EndReceiveGameData(IAsyncResult result); } Listing 2. The IGameStreamClient interfaces defines client operations that a server can call.   The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property.  This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate, however, no data will be passed back. Instead, data will be pushed back to the client as it’s available.  Looking through the IGameStreamService interface you can see that the client can request team data whereas the IGameStreamClient interface allows team and game data to be received by the client. One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones game data wasn’t being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceivedGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I’ll provide more details on the implementation of these two methods later in this post. Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a Singleton service object). Listing 3 shows the game service class as well as its fields and constructor.   [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)] public class GameStreamService : IGameStreamService { object _Key = new object(); Game _Game = null; Timer _Timer = null; Random _Random = null; Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>(); static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted); public GameStreamService() { _Game = new Game(); _Timer = new Timer { Enabled = false, Interval = 2000, AutoReset = true }; _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed); _Timer.Start(); _Random = new Random(); }} Listing 3. The GameStreamService implements the IGameStreamService interface which defines a callback contract that allows the service class to push data back to the client. 
By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method, which is responsible for supplying information about the teams that are playing as well as individual players. GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data. Listing 4 shows the GetTeamData() method.

public void GetTeamData()
{
    //Get client callback channel
    var context = OperationContext.Current;
    var sessionID = context.SessionId;
    var currClient = context.GetCallbackChannel<IGameStreamClient>();
    context.Channel.Faulted += Disconnect;
    context.Channel.Closed += Disconnect;

    IGameStreamClient client;
    if (!_ClientCallbacks.TryGetValue(sessionID, out client))
    {
        lock (_Key)
        {
            _ClientCallbacks[sessionID] = currClient;
        }
    }
    currClient.ReceiveTeamData(_Game.GetTeamData());

    //Start timer which when fired sends updated score information to client
    if (!_Timer.Enabled)
    {
        _Timer.Enabled = true;
    }
}

Listing 4. The GetTeamData() method subscribes a given client to the game service and pushes team data back through the client's callback channel.

The key line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>(). This method is responsible for accessing the calling client's callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and is used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client's session ID and checks whether it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn't exist it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData(). Since the service simulates a basketball game, a timer is then started (if it's not already enabled), which is used to send data to the client at random intervals. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires, as well as the SendGameData() method used to send data to the client.

void _Timer_Elapsed(object sender, ElapsedEventArgs e)
{
    int interval = _Random.Next(3000, 7000);
    lock (_Key)
    {
        _Timer.Interval = interval;
        _Timer.Enabled = false;
    }
    SendGameData(_Game.GetGameData());
}

private void SendGameData(GameData gameData)
{
    var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened);
    for (int i = 0; i < cbs.Count(); i++)
    {
        var cb = cbs.ElementAt(i).Value;
        try
        {
            cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb);
        }
        catch (TimeoutException texp)
        {
            //Log timeout error
        }
        catch (CommunicationException cexp)
        {
            //Log communication error
        }
    }
    lock (_Key) _Timer.Enabled = true;
}

private static void ReceiveGameDataCompleted(IAsyncResult result)
{
    try
    {
        ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result);
    }
    catch (CommunicationException)
    {
        // empty
    }
    catch (TimeoutException)
    {
        // empty
    }
}

Listing 5. _Timer_Elapsed() is used to simulate time in a basketball game; when it fires, SendGameData() pushes game data to subscribed clients.

When _Timer_Elapsed() fires, the SendGameData() method is called, which iterates through the clients wanting to be notified of changes. As each client is identified, its BeginReceiveGameData() method is called, which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2.
Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn't, which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly, since 3–7 second delays occur as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly since each call was handled in an asynchronous manner.

The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won't post all of the code here since the sample application includes everything you need. However, Listing 6 shows the key configuration code that creates a custom binding named pollingDuplexBinding and associates it with the service's endpoint.

<bindings>
  <customBinding>
    <binding name="pollingDuplexBinding">
      <binaryMessageEncoding />
      <pollingDuplex maxPendingSessions="2147483647"
                     maxPendingMessagesPerSession="2147483647"
                     inactivityTimeout="02:00:00"
                     serverPollTimeout="00:05:00"/>
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
<services>
  <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior">
    <endpoint address="" binding="customBinding" bindingConfiguration="pollingDuplexBinding"
              contract="GameService.IGameStreamService"/>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it.

Calling the Service and Receiving "Pushed" Data

Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements generated for a standard WCF proxy. You can certainly update the file if you want to read from it at runtime, but for the sample application I fed the service URI directly to the service proxy as shown next:

var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc");
var binding = new PollingDuplexHttpBinding();
_Proxy = new GameStreamServiceClient(binding, address);
_Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived;
_Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived;
_Proxy.GetTeamDataAsync();

This code creates the proxy and passes the endpoint address and binding to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync(). Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.
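Only the game data handler is shown later in Listing 8; the _Proxy_ReceiveTeamDataReceived() handler wired up above isn't included in the post. A minimal, hedged sketch of what it might look like follows — the e.teamData member name is an assumption made by analogy with the e.gameData member used in Listing 8, tbTeam2 and the Team.Name property are likewise assumed, and none of this is taken from the sample download:

void _Proxy_ReceiveTeamDataReceived(object sender, ReceiveTeamDataReceivedEventArgs e)
{
    // Assumption: the generated event args expose the pushed team list as e.teamData,
    // mirroring the e.gameData member used on ReceiveGameDataReceivedEventArgs in Listing 8.
    var teams = e.teamData;
    if (teams == null || teams.Count < 2) return;

    // Hypothetical UI update - the sample's actual team display logic isn't shown in the post.
    // tbTeam1 appears in Listing 8; tbTeam2 and Team.Name are assumed by analogy.
    this.tbTeam1.Text = teams[0].Name;
    this.tbTeam2.Text = teams[1].Name;
}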
As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.

Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex.

void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e)
{
    UpdateGameData(e.gameData);
}

private void UpdateGameData(GameData gameData)
{
    //Update score
    this.tbTeam1Score.Text = gameData.Team1Score.ToString();
    this.tbTeam2Score.Text = gameData.Team2Score.ToString();

    //Update ball visibility
    if (gameData.Action != ActionsEnum.Foul)
    {
        if (tbTeam1.Text == gameData.TeamOnOffense)
        {
            AnimateBall(this.BB1, this.BB2);
        }
        else //Team 2
        {
            AnimateBall(this.BB2, this.BB1);
        }
    }

    if (this.lbActions.Items.Count > 9)
        this.lbActions.Items.Clear();

    this.lbActions.Items.Add(gameData.LastAction);

    if (this.lbActions.Visibility == Visibility.Collapsed)
        this.lbActions.Visibility = Visibility.Visible;
}

private void AnimateBall(Image onBall, Image offBall)
{
    this.FadeIn.Stop();
    Storyboard.SetTarget(this.FadeInAnimation, onBall);
    Storyboard.SetTarget(this.FadeOutAnimation, offBall);
    this.FadeIn.Begin();
}

Listing 8. As the server pushes game data, the client's _Proxy_ReceiveGameDataReceived() method is called to process the data.

In a real-life application I'd go with a ViewModel class to handle retrieving team data, setting up data bindings and handling data that is pushed from the server (a rough sketch of that approach appears after the download link at the end of this post). However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.

Summary

Silverlight supports three options when duplex communication is required in an application: TCP bindings, sockets and HTTP Polling Duplex. In this post you've seen how HTTP Polling Duplex interfaces can be created and implemented on the server as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to "push" data from a server while still allowing the data to flow over port 80 or another port of your choice.

Sample Application Download
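As promised earlier, here is a minimal sketch of the ViewModel approach mentioned above. It is an illustration only — the class name, the Teams/CurrentGame properties and the e.teamData member are assumptions, and fault handling is omitted — it is not code from the sample download:

using System;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.ServiceModel;

public class GameViewModel : INotifyPropertyChanged
{
    private readonly GameStreamServiceClient _proxy;
    private GameData _currentGame;

    public event PropertyChangedEventHandler PropertyChanged;

    // Bound to the view; filled when the server pushes team data
    public ObservableCollection<Team> Teams { get; private set; }

    // Bound to the view; replaced each time the server pushes game data
    public GameData CurrentGame
    {
        get { return _currentGame; }
        private set { _currentGame = value; RaisePropertyChanged("CurrentGame"); }
    }

    public GameViewModel(Uri serviceUri)
    {
        Teams = new ObservableCollection<Team>();

        _proxy = new GameStreamServiceClient(new PollingDuplexHttpBinding(),
                                             new EndpointAddress(serviceUri));

        _proxy.ReceiveTeamDataReceived += (s, e) =>
        {
            Teams.Clear();
            foreach (Team team in e.teamData)   // e.teamData is an assumed member name
                Teams.Add(team);
        };

        _proxy.ReceiveGameDataReceived += (s, e) => CurrentGame = e.gameData;

        // Subscribe so the server adds this client to its callback dictionary and starts pushing data
        _proxy.GetTeamDataAsync();
    }

    private void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}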

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
What is RIDC?

Remote IntraDoc Client is a Java-enabled API that leverages simple transport protocols like Socket, HTTP and JAX/WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in a Map format (key and value pairs), what service to call and what parameter settings to apply. The response is built on the same Map format (key and value pairs). The possibilities with RIDC are endless since you can consume any available service (even custom-made ones), and RIDC can be executed from any Java SE application that needs WebCenter Content services.

WebCenter Portal and the example Accelerator RIDC adapter framework

WebCenter Portal currently integrates and leverages WebCenter Content Services to enable the use cases available in the portal today, like Content Presenter and Doc Lib. However, the current use cases only cover a few of the scenarios that the Content Server has to offer; in addition, it is not rare that customer requirements call for additional steps and functionality that are provided by WebCenter Content but are not part of the out-of-the-box WebCenter Portal use cases.

The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million-dollar question is: how can I leverage this infrastructure for my custom use cases? Oracle A-Team has, during its customer interactions, produced an accelerator adapter framework that reuses the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), along with a comprehensive design pattern to minimize the work involved when exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.

How do I get started?

Through a few easy steps you will be on your way:

1. Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp).
2. Open your Portal Application in JDeveloper (PS4/PS5) and choose to open the project in your application – this will add the project as a member of the application.
3. Update the Portal project dependencies to include the new RIDCCommon project.
4. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form).

You should by this stage have a similar structure in your JDeveloper application:

Portal Project
PortalWebAssets Project
RIDCCommon Project

Since the API comes with some example operations that have already been exposed as DataControl actions, if you open the Data Controls accordion you should see the following:

How do I implement my own operation?
1. Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation.
2. Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation.
3. The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext).
4. The best practice for setting object references for the operation is through the constructor, for example: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can force the calling class to pass the right information; you can also overload the constructor with more or fewer parameters as required.
5. Implement the execute method. The work to perform here is creating a new request binder and retrieving a response binder based on the information in the request binder – in this case the dDocName for which we want the DocInfo.
6. Next, process the response binder by extracting the information you need from the response and storing it in a simple POJO Java bean. In the example below we do this in private void processResult(DataBinder responseData) – the new SearchDataObject is a member of GetDocInfoOperation so we can return it from an access method.
7. Since the RIDCCommon API leverages a template pattern for the operations, you are now required to add a method that enables access to the result after the execution of the operation. In the example below we added the method public SearchDataObject getDataObject() – this method returns the SearchDataObject prepared in the execute method.

That is it – as you can see in the code below, you do not need more than 32 lines of very simple code:

public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation {
    private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME";
    private String dDocName = null;
    private SearchDataObject sdo = null;

    public GetDocInfoOperation(String dDocName) {
        super();
        this.dDocName = dDocName;
    }

    public boolean execute(RIDCManager manager, IdcClient client,
                           IdcContext userContext) throws Exception {
        DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME);
        dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName);

        DataBinder responseData = getResponseBinder(dataBinder);
        processResult(responseData);
        return true;
    }

    private void processResult(DataBinder responseData) {
        DataResultSet rs = responseData.getResultSet("DOC_INFO");
        for (DataObject dobj : rs.getRows()) {
            this.sdo = new SearchDataObject(dobj);
        }
        super.setMessage(responseData.getLocal(ATTR_MESSAGE));
    }

    public SearchDataObject getDataObject() {
        return this.sdo;
    }
}

How do I execute my operation?
In the previous section we described how to create an operation, so by now you should be ready to execute it:

1. Either add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or to a class of your own choice. Remember that RIDCManager is a very light object and can be created where needed.
2. Create a method signature that looks like this: public SearchDataObject getDocInfo(String dDocName) throws Exception
3. In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName)
4. Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo)
5. Return the result by accessing it from the executed operation: return docInfo.getDataObject()

private RIDCManager rMgr = null;
private String lastOperationMessage = null;

public ContentServicesDC() {
    super();
    this.rMgr = new RIDCManager();
}
....
public SearchDataObject getDocInfo(String dDocName) throws Exception {
    GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName);
    boolean boolVal = rMgr.executeOperation(docInfo);
    lastOperationMessage = docInfo.getMessage();
    return docInfo.getDataObject();
}

Get the binaries!

The enclosed code is an example that can be used as a reference for how to consume and leverage similar use cases; the user has to guarantee appropriate quality and support.

Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip

RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • 10 PowerShell One Liners

    - by BizTalk Visionary
Here are a few one-liners that use NetCmdlets. Some of these I've blogged about before, some are new. Let me know if you have questions, which ones you find useful, or how you altered these to suit your own needs.

Send email to a list of recipient addresses:
import-csv users.csv | % { send-email -to $_.email -from [email protected] -subject "Important Email" -message "Hello World!" -server 10.0.1.1 }

Show the access control list for a specific Exchange folder:
get-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES -acl

Add look and read permissions on an Exchange folder, for a list of accounts pulled from a CSV file:
import-csv users.csv | % { set-imap -server $mymailserver -cred $mycred -acluser $_.username -folder INBOX.RESUMES -acl "lr" }

Sync system time with an Internet time server:
get-time -server clock.psu.edu -set

To remotely sync the time on a set of computers:
import-csv computers.csv | % { Invoke-Command -computerName $_.computer -cred $mycred -scriptblock { get-time -server clock.psu.edu -set } }

Delete all emails from an Exchange folder that match a certain criteria. For example, delete all emails from [email protected]:
get-imap -server $mailserver -cred $mycred | ? { $_.FromEmail -eq "[email protected]" } | % { set-imap -server $mailserver -cred $mycred -message $_.Id -delete }

Update Twitter status from PowerShell:
get-http -url "http://twitter.com/statuses/update.xml" -cred $mycred -variablename status -variablevalue "Tweeting with NetCmdlets!"

A test-path that works over FTP, FTPS (SSL), and SFTP (SSH) connections:
get-ftp -server $remoteserver -cred $mycred -path /remote/path/to/checkfor*
Don't forget the *. Also, to use SSL or SSH just add an -ssl or -ssh parameter.

List disabled user accounts in Active Directory (or any other LDAP server):
get-ldap -server $ad -cred $mycred -dn dc=yourdc -searchscope wholesubtree -search "(&(objectclass=user)(objectclass=person)(company=*)(userAccountControl:1.2.840.113556.1.4.803:=2))"

List Active Directory groups and their members:
get-ldap -server testman -cred $mycred -dn dc=NS2 -searchscope wholesubtree -search "(&(objectclass=group)(cn=*admin*))" | select ResultDN, member

Display the last initialization time (e.g. last reboot time) of all discoverable SNMP agents on a network:
import-csv computers.csv | % { get-snmp -agent $_.computer -oid sysUpTime.0 | % { ([datetime]::Now).AddSeconds(-($_.OIDValue/100)) } }

Not mentioned here: data conversion (Yenc, QP, UUencoding, MD5, SHA1, base64, etc), DNS, News Groups (NNTP/UseNet), POP mail, RSS feeds, Amazon S3, Syslog, TFTP, TraceRoute, SNMP Traps, UDP, WebDAV, whois, Rexec/Rshell/Telnet, Zip files, sending IMs (Jabber/GoogleTalk/XMPP), sending text messages and pages, ping, and more.

Original Source: Lance's Textbox

    Read the article
