Search Results

Search found 17610 results on 705 pages for 'specific'.


  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:
One class for GUI and internal constants (Tab Page titles, Group Box titles, calculation factors, enumerations)
One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
One class for application messages (logging, message boxes etc)
The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only defined in the .h file and the values are assigned in the .cpp file. One of the advantages is that all strings etc are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar Group Boxes / Tab Pages etc at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc. However, I see certain disadvantages:
Every single class is tightly coupled to the constants classes.
Adding/Removing/Renaming/Moving a constant requires recompilation of at least 90% of the application (Note: Changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge number of class relations is the source, but it might well be.
You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts.
Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier).
Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer. PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question. Update on this project: I have news on the compile time thing. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now much easier. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table and column names and more - more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour!
We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely only on nightly builds. That fact - and other benefits, such as less tight coupling, better reusability etc - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)
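As an illustration of the split described in the update, here is a minimal sketch of one small constants class per functional area (shown in C# for brevity; the same idea applies to splitting C++ headers per area). All names are hypothetical:

namespace MyApp.Constants.Gui
{
    // GUI captions only: classes that merely display text depend on nothing else.
    public static class TabPageTitles
    {
        public const string Settings = "Settings";
        public const string Results = "Results";
    }
}

namespace MyApp.Constants.Messages
{
    // Logging and message-box text lives in its own file (or assembly).
    public static class LogMessages
    {
        public const string CalculationStarted = "Starting calculation...";
    }
}

namespace MyApp.Constants.Database
{
    // Generated table/column names, ideally emitted into their own library
    // so regenerating them does not touch GUI or messaging code.
    public static class Tables
    {
        public const string Customers = "T_CUSTOMERS";
    }
}

The point of the split is simply that a class which only needs a tab title no longer has to be rebuilt when a database column is renamed.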

    Read the article

  • Taking HRMS to the Cloud to Simplify Human Resources Management

    - by HCM-Oracle
    By Anke Mogannam With human capital management (HCM) a top-of-mind issue for executives in every industry, human resources (HR) organizations are poised to have their day in the sun—proving not just their administrative worth but their strategic value as well.  To make good on that promise, however, HR must modernize. Indeed, if HR is to act as an agent of change—providing the swift reallocation of employees  and the rapid absorption of employee data required for enterprises to shift course on a dime—it must first deal with the disruptive change at its own front door. And increasingly, that means choosing the right technology and human resources management system (HRMS) for managing the entire employee lifecycle. Unfortunately, for most organizations, this task has proved easier said than done. This is because while much has been written about advances in HRMS technology, until recently, most of those advances took the form of disparate on-premises solutions designed to serve very specific purposes. Although this may have resulted in key competencies in certain areas, it also meant that processes for core HR functions like payroll and benefits were being carried out in separate systems from those used for talent management, workforce optimization, training, and so on. With no integration—and no single system of record—processes were disconnected, ease of use was impeded, user experience was diminished, and vital data was left untapped.  Today, however, that scenario has begun to change, and end-to-end cloud-based HCM solutions have moved from wished-for innovations to real-life solutions. Why, then, have HR organizations been so slow in adopting them? The answer—it would seem—is, “It’s complicated.” So complicated, in fact, that 45 percent of the respondents to PwC’s “Annual HR Technology Survey” (for 2013) reported having no formal HR software roadmap, and 40 percent stated that they “did not know” whether their organizations would be increasing their use of cloud or software as a service (SaaS) for HR.  Clearly, HR organizations need help sorting through the morass of HR software options confronting them. But just as clearly, there’s an enormous opportunity awaiting those that do. The trick will come in charting a course that allows HR to leverage existing technology while investing in the cloud-based solutions that will deliver the end-to-end processes, easy-to-understand analytics, and superior adaptability required to simplify—and add value to—every aspect of employee management. The Opportunity therefore is to cut costs, drive Innovation, and increase engagement by moving to cloud-based HCM.  Then you will benefit from one Interface, leverage many access points, and  gain at-a-glance insight across your entire workforce. With many legacy on-premises HR systems not being efficient anymore and cloud-based, integrated systems that span the range of HR functions finally reaching maturity, the time is ripe for moving core HR to the cloud. Indeed, for the first time ever there are more HRMS replacement initiatives than HRMS upgrade initiatives under way, and the majority of them involve moving to the cloud per Cedar Crestone’s 2013-2014 HRMS survey. To learn how you can launch your own cloud HCM initiative and begin using HR to power the enterprise, visit Oracle HRMS in the Cloud and Oracle’s new customer 2 cloud program. 
Anke Mogannam brings more than 16 years of marketing and human capital management experience in the technology industries to her role at Oracle where she is part of the Human Capital Management applications marketing team. In that role, Anke drives content marketing, messaging, go-to-market activities, integrated marketing campaigns, and field enablement. Prior to joining Oracle, Anke held several roles in communications, marketing, HCM product strategy and product management at PeopleSoft, SAP, Workday and Saba. Follow her on Twitter @amogannam

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted around the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success shows that 72% of users hadn’t used PFM and worse, 58% of them were not kicked about using it. Of the rest who had used it, only half did on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expense, it is simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and Cash Flow is important but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget over runs and spend analysis are post facto and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight based decisions. So what can be done to resuscitate PFM? Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to Spendable Balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions. Personal Financial Guidance - Banks need to think ground level and see if their PFM offering is really helping customers achieve self actualisation. Banks need to recognise that most customers out there are non-proficient about making the best value of their money. Customers return when they know that they are being guided rather than being just informed on their finance. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending thereby encouraging sound savings habits. Mobile PFM – Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM on to devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device in pushing real time notifications to make pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals real time and push them pre-event on to the device. So next time, I should know that I have over run my eating out budget before walking into that burger joint and not after. Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. 
Integrating social media into PFM engagement will let customers share and solicit financial management experiences within their peer group. Peer comparisons help benchmark one's savings and spending habits against those of the peer group and increase stickiness. While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into this trap. Best practices lie in profiling and segmenting customers, being where they are and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best. “A language that doesn‘t affect the way you think about programming is not worth knowing“ With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems. Clojure Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state. Haskell Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed. Erlang Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state. The Benefits of Multilingualism By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state. 
I often find myself refactoring methods like this… void SomeMethod(bool doThis, bool doThat) { if (!(doThis ^ doThat)) throw new ArgumentException("At least one arg should be true"); if (doThis) DoThis(); if (doThat) DoThat(); } …into a type-based solution, like this: enum Action { DoThis, DoThat, Both }; void SomeMethod(Action action) { if (action == Action.DoThis || action == Action.Both) DoThis(); if (action == Action.DoThat || action == Action.Both) DoThat(); } At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of check boxes based on data from a .js include, with data like this: $scope.fieldMappings.investmentObjectiveMap = [ {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"}, {'id':"STABLE", 'name':"Moderate"}, {'id':"BALANCED", 'name':"Moderate Growth"}, // etc {'id':"NONE", 'name':"None"} ]; The checkboxes are created using an ng-repeat, like this: <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap"> ... </div> However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldmappings object). To accomplish this, I created a directive, which accepts a destination array destarray which is eventually mapped to the model. I also know I need to handle some very specific gui controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that its bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far: (html): <input my-checkbox-group type="checkbox" fieldobj="investmentObjective" ng-click="validationfunc()" validationfunc="clearOnNone()" destarray="investor.investmentObjective" /> Directive code: .directive("myCheckboxGroup", function () { return { restrict: "A", scope: { destarray: "=", // the source of all the checkbox values fieldobj: "=", // the array the values came from validationfunc: "&" // the function to be called for validation (optional) }, link: function (scope, elem, attrs) { if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) { elem[0].checked = true; } elem.bind('click', function () { var index = scope.destarray.indexOf(scope.fieldobj.id); if (elem[0].checked) { if (index === -1) { scope.destarray.push(scope.fieldobj.id); } } else { if (index !== -1) { scope.destarray.splice(index, 1); } } }); } }; }) .js controller snippet: .controller( 'SuitabilityCtrl', ['$scope', function ( $scope ) { $scope.clearOnNone = function() { // naughty jQuery DOM manipulation code that // looks at checkboxes and checks/unchecks as needed }; The above code is done and works fine, except the naughty jquery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over more than if I had just written jQuery code that 99% of us would understand with a glance? How do other developers draw the line? I see this all over Stack Overflow. 
For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet he has opted to do it the angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then check/uncheck the other boxes accordingly? A more complex directive? I can't believe I'm the only developer that is having to implement code that is more complex than needed just to satisfy an opinionated framework.

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput. So flow control is in effect: when the congestion window is less than or equal to the number of bytes outstanding on the connection (we can derive this from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number); and when the window in the TCP segment received is advertised as 0. We time from these events until we send new data (i.e. until args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

tcp:::send
/ (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
{
        cwndclosed[args[1]->cs_cid] = timestamp;
        cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
        @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
{
        @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = avg(timestamp - cwndclosed[args[1]->cs_cid]);
        @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = stddev(timestamp - cwndclosed[args[1]->cs_cid]);
        @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        cwndclosed[args[1]->cs_cid] = 0;
        cwndsnxt[args[1]->cs_cid] = 0;
}

tcp:::receive
/ args[4]->tcp_window == 0 && (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
        swndclosed[args[1]->cs_cid] = timestamp;
        swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
        @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
{
        @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = avg(timestamp - swndclosed[args[1]->cs_cid]);
        @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = stddev(timestamp - swndclosed[args[1]->cs_cid]);
        swndclosed[args[1]->cs_cid] = 0;
        swndsnxt[args[1]->cs_cid] = 0;
}

END
{
        printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host", "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num");
        printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed, @stddevtimeclosed, @numclosed);
}

So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

^C
Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev    Num
cwnd   10.175.96.92         80       86064329                  77311705  125
cwnd   10.175.96.92         22       122068522                 151039669 81

So we see in this case, the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086sec for port 80 and 0.12sec for port 22.
So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful step may be to see if congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Managing common code on Windows 7 (.NET) and Windows 8 (WinRT)

    - by ryanabr
    Recent announcements regarding Windows Phone 8 and the fact that it will have WinRT behind it might make some of this less painful, but I discovered that the "XmlDocument" object is in a new location in WinRT and is almost the same as its brother in .NET: System.Xml.XmlDocument (.NET) versus Windows.Data.Xml.Dom.XmlDocument (WinRT). The problem I am trying to solve is how to work with both types in code that performs the same task on both the Windows Phone 7 and Windows 8 platforms. The first thing I did was define my own XmlNode and XmlNodeList classes that wrap the actual Microsoft objects, so that by using the "#if" compiler directive the calling code can easily work with either the WinRT version of the type or the .NET version.

public class XmlNode
{
#if WIN8
    public Windows.Data.Xml.Dom.IXmlNode Node { get; set; }

    public XmlNode(Windows.Data.Xml.Dom.IXmlNode xmlNode)
    {
        Node = xmlNode;
    }
#endif
#if !WIN8
    public System.Xml.XmlNode Node { get; set; }

    public XmlNode(System.Xml.XmlNode xmlNode)
    {
        Node = xmlNode;
    }
#endif
}

public class XmlNodeList
{
#if WIN8
    public Windows.Data.Xml.Dom.XmlNodeList List { get; set; }

    public int Count { get { return (int)List.Count; } }

    public XmlNodeList(Windows.Data.Xml.Dom.XmlNodeList list)
    {
        List = list;
    }
#endif
#if !WIN8
    public System.Xml.XmlNodeList List { get; set; }

    public int Count { get { return List.Count; } }

    public XmlNodeList(System.Xml.XmlNodeList list)
    {
        List = list;
    }
#endif
}

From there I can then use my XmlNode and XmlNodeList in the calling code without having to clutter the code with all of the additional #if switches. The challenge after this was that the code that worked directly with the XmlDocument object needed to be separate on both platforms, since the method for populating the XmlDocument object is completely different on each. To solve this issue, I made partial classes: one partial class for .NET and one for WinRT. Both projects have links to the partial class that contains the code that is the same for the majority of the class, and a platform-specific partial class contains the code that is unique to that version of the XmlDocument. The files with the little arrow in the lower left corner denote 'linked files' and are shared in multiple projects but only exist in one location in source control. You can see that the _Win7 partial class is included directly in the project since it includes code that is only for the .NET platform, whereas its cousin, the _Win8 (not pictured above), has all of the code specific to the Win8 platform. In the _Win7 partial class is this code:

public partial class WUndergroundViewModel
{
    public static WUndergroundData GetWeatherData(double lat, double lng)
    {
        WUndergroundData data = new WUndergroundData();
        System.Net.WebClient c = new System.Net.WebClient();

        string req = "http://api.wunderground.com/api/xxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
        req = req.Replace("[LAT]", lat.ToString());
        req = req.Replace("[LNG]", lng.ToString());

        XmlDocument doc = new XmlDocument();
        doc.Load(c.OpenRead(req));

        foreach (XmlNode item in doc.SelectNodes("/response/features/feature"))
        {
            switch (item.Node.InnerText)
            {
                case "yesterday":
                    ParseForecast(
                        new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                        new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                        data);
                    break;
                case "conditions":
                    ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                    break;
                case "forecast":
                    ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                    break;
            }
        }
        return data;
    }
}

In the _Win8 partial class is this code:

public partial class WUndergroundViewModel
{
    public async static Task<WUndergroundData> GetWeatherData(double lat, double lng)
    {
        WUndergroundData data = new WUndergroundData();
        HttpClient c = new HttpClient();

        string req = "http://api.wunderground.com/api/xxxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
        req = req.Replace("[LAT]", lat.ToString());
        req = req.Replace("[LNG]", lng.ToString());

        HttpResponseMessage msg = await c.GetAsync(req);
        string stream = await msg.Content.ReadAsStringAsync();

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(stream, null);

        foreach (IXmlNode item in doc.SelectNodes("/response/features/feature"))
        {
            switch (item.InnerText)
            {
                case "yesterday":
                    ParseForecast(
                        new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                        new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                        data);
                    break;
                case "conditions":
                    ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                    break;
                case "forecast":
                    ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                    break;
            }
        }
        return data;
    }
}

Summary: This method allows me to have common 'business' code for both platforms that is pretty clean, and I manage the technology differences separately. Thank you tostringtheory for your suggestion, I was considering that approach.
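Following the same pattern, the document-loading step itself could be hidden behind an #if as well; the sketch below is my own illustration (the wrapper name and the unified LoadXml method are not from the original post, and it assumes the class sits next to the XmlNode/XmlNodeList wrappers above):

public class XmlDocumentWrapper
{
#if WIN8
    public Windows.Data.Xml.Dom.XmlDocument Document { get; private set; }

    public void LoadXml(string xml)
    {
        // WinRT flavour of the document
        Document = new Windows.Data.Xml.Dom.XmlDocument();
        Document.LoadXml(xml);
    }
#else
    public System.Xml.XmlDocument Document { get; private set; }

    public void LoadXml(string xml)
    {
        // .NET flavour of the document
        Document = new System.Xml.XmlDocument();
        Document.LoadXml(xml);
    }
#endif

    // These members compile against whichever Document type is active
    // and hand back the shared wrapper types defined earlier.
    public XmlNodeList SelectNodes(string xpath)
    {
        return new XmlNodeList(Document.SelectNodes(xpath));
    }

    public XmlNode SelectSingleNode(string xpath)
    {
        return new XmlNode(Document.SelectSingleNode(xpath));
    }
}

With something like this in place, more of the fetch-and-parse code could be shared, although the async HttpClient versus synchronous WebClient difference would still force some divergence between the platform partial classes.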

    Read the article

  • Strings in .NET are Enumerable

    - by Scott Dorman
    It seems like there is always some confusion concerning strings in .NET. This is both from developers who are new to the Framework and those that have been working with it for quite some time. Strings in the .NET Framework are represented by the System.String class, which encapsulates the data manipulation, sorting, and searching methods you most commonly perform on string data. In the .NET Framework, you can use System.String (which is the actual type name) or the language alias (for C#, string). They are equivalent, so use whichever naming convention you prefer but be consistent. Common usage (and my preference) is to use the language alias (string) when referring to the data type and String (the actual type name) when accessing the static members of the class. Many mainstream programming languages (like C and C++) treat strings as a null terminated array of characters. The .NET Framework, however, treats strings as an immutable sequence of Unicode characters which cannot be modified after it has been created. Because strings are immutable, all operations which modify the string contents are actually creating new string instances and returning those. They never modify the original string data. There is one important word in the preceding paragraph which many people tend to miss: sequence. In .NET, strings are treated as a sequence…in fact, they are treated as an enumerable sequence. This can be verified if you look at the class declaration for System.String, as seen below:

// Summary:
//     Represents text as a series of Unicode characters.
public sealed class String : IEnumerable, IComparable, IComparable<string>, IEquatable<string>

The first interface that String implements is IEnumerable, which has the following definition:

// Summary:
//     Exposes the enumerator, which supports a simple iteration over a non-generic
//     collection.
public interface IEnumerable
{
    // Summary:
    //     Returns an enumerator that iterates through a collection.
    //
    // Returns:
    //     An System.Collections.IEnumerator object that can be used to iterate through
    //     the collection.
    IEnumerator GetEnumerator();
}

As a side note, System.Array also implements IEnumerable. Why is that important to know? Simply put, it means that any operation you can perform on an array can also be performed on a string. This allows you to write code such as the following:

string s = "The quick brown fox";
foreach (var c in s)
{
    System.Diagnostics.Debug.WriteLine(c);
}
for (int i = 0; i < s.Length; i++)
{
    System.Diagnostics.Debug.WriteLine(s[i]);
}

If you executed those lines of code in a running application, you would see each character of the string written to the Visual Studio Output window, once per loop. In the case of a string, these enumerable or array operations return a char (System.Char) rather than a string. That might lead you to believe that you can get around the string immutability restriction by simply treating strings as an array and assigning a new character to a specific index location inside the string, like this:

string s = "The quick brown fox";
s[2] = 'a';

However, if you were to write such code, the compiler will promptly tell you that you can’t do it: the string indexer is read-only, so the assignment will not compile. This preserves the notion that strings are immutable and cannot be changed once they are created. (Incidentally, there is no built-in way to replace a single character like this. It can be done but it would require converting the string to a character array, changing the appropriate indexed location, and then creating a new string.)
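To make that last point concrete, here is a minimal sketch of the char-array round trip described above:

string s = "The quick brown fox";

// Copy the characters into a mutable array, change one, and build a new string.
char[] chars = s.ToCharArray();
chars[2] = 'a';
string modified = new string(chars);

System.Diagnostics.Debug.WriteLine(modified); // "Tha quick brown fox"
System.Diagnostics.Debug.WriteLine(s);        // the original instance is untouched

The original string is never modified; a second, independent string instance is created instead.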

    Read the article

  • Antenna Aligner Part 4: Role'ing in the deep

    - by Chris George
    Since last time I've been trying to sort out the general workflow of the app. It's fundamentally not hard, there is a list of transmitters, you select a transmitter and it shows the compass view. Having done quite a bit of ajax/asp.net/html in the past, I immediately started off by creating two divs within my 'page', one for the list, one for the compass. Then using the onClick event in the list, this will switch the display attribute on the divs. This seemed to work, but did lead to some dodgy transitional redrawing artefacts which I was not happy with. So after some Googling I realised I was doing it all wrong! JQuery mobile has the concept of giving an object in html a data-role. By giving a div the attribute data-role="page" it is then treated as a separate page on the mobile device. Within the code, this is referenced like a html anchor in the form #mypage. Using this system, page transitions such as fade or slide are automatically applied which adds to the whole authenticity of the app! Here is a simple example: . <a href="#'compasspage">compass</a> . <div data-role="page" id="compasspage" data-add-back-btn="true"> But I don't want just a static link, I want to dynamically create my list, and get each list elements to switch to the compass page with the right information. So here is the jquery that I used to dynamically inject new <li> rows into the <ul> block. $('ul').append($('<li/>', {    //here appendin `<li>`     'data-role': "list-divider" }).append($('<a/>', {    //here appending `<a>` into `<li>`     'href': '#compasspage',     'data-transition': 'none',     'onclick': 'selectTx(' + i + ')',     'html': buttonHtml }))); $('ul').listview('refresh'); This is called within a for loop so the first 5 appropriate transmitters are used. There are several things of interest to note here. Firstly, I could not find a more elegant way to tell the target page which transmitter I've clicked on, so I have used the onclick event as well as the href attribute. The onclick event fires 'selectTx' which simply sets a global member variable to the specific index number I've clicked on. Yes it's not nice, but it works. Secondly, the data-transition attribute is set to 'none'. I wanted the transition between the pages to be a whooshy slidey effect. However this worked going to the compass page, but returning to the list page gave some undesirable visual artefacts (flickering, redrawing etc.). So I decided to remove the transitions all together, which was a shame. Thirdly, rather than embedding loads of html into the append command, I removed this out into a variable 'buttonHtml'. Doing this really tidied up my code. Until next time!

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I had described the factors that lead to compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance. First, let’s look at the business implications of increasing complexity of process compliance for financial institutions:
· Increased time and cost of compliance due to duplication of effort in conforming to regulatory requirements due to process changes driven by evolving regulatory mandates, shifting business priorities or internal/external audit requirements
· Delays in audit reporting due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process
Next, let’s consider some approaches to managing the operational risk of business processes. Financial institutions considering reducing operational risk of their processes, generally speaking, have two choices:
· Rip-and-replace existing applications with new off-the-shelf applications.
· Extend capabilities of existing applications by modeling their data and process interactions, with other applications or user-channels, outside of the application boundary using BPM.
The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, pre-built compliance of any packaged application or custom-built application should not be mistaken as a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications or between overlaying processes where vulnerabilities arise to unknown and accidental faults that potentially result in errors and lead to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries, in other words, span multiple applications, constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with “KYC” policies void even when the “KYC” checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process ”gaps” for Financial Services processes including “Customer On-boarding”, “Quote-to-Contract”, “Deposit/Loan Origination”, “Trade Exceptions”, “Interest Claim Tracking” etc.
If you are faced with a similar challenge and need any guidance on the same feel free to drop me a note.

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013 in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and discussed a number of customers deploying Oracle's Big Data / Fast Data solutions and in particular Oracle's NoSQL Database.  Since that time, there have been a large number of request seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.   Fast Data is a software solution stack that deals with one aspect of Big Data, high velocity.   The software in the Fast Data solution stack involves 3 key pieces and their integration:  Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database.   All three of these technologies address a high throughput, low latency data management requirement.   Oracle Event Processing enables continuous query to filter the Big Data fire hose, enable intelligent chained events to real-time service invocation and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time to allow SQL statements to look for triggers on events like breach of weighted moving average on a real-time data stream.    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory latency speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.   The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads.  For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure thru clustered, always on, parallel processing in a shared nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location based personalization service. The question then becomes how do these things work together to deliver an end to end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.   You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval.   Here is a great reference to how these two technologies are integrated and work together.  Coherence & Oracle NoSQL Database.   On the stream processing side, it is similar as with the Coherence case.  
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'? The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008: SEPA Credit Transfer (a Payables transaction in Oracle EBS) SEPA Core Direct Debit (a Receivables transaction in Oracle EBS) A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of the SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6. A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6. EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments: Euro Member States: February 1, 2014 Non-Euro Member States: October 31, 2016. 
Oracle and SEPA
Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:
Release 11.5.10.x - AP & AR
Release 12.0.x - AP & AR & IBY
Release 12.1.x - AP & AR & IBY
Release 12.2.x - AP & AR & IBY
Resources
To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:
R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • SharePoint For Newbie Developers: Code Scope

    - by Mark Rackley
    So, I continue to try to come up with diagrams and information to help new SharePoint developers wrap their heads around this SharePoint beast, especially when those newer to development are on my team. To that end, I drew up the below diagram to help some of our junior devs understand where/when code is being executed in SharePoint at a high level. Note that I say “High Level”… This is a simplistic diagram that can get a LOT more complicated if you want to dive in deeper. For the purposes of my lesson it served its purpose well. So, please no comments from the peanut gallery about information 3 levels down that’s missing unless it adds to the discussion. Thanks. So, the diagram below details where code is executed on a page load and gives the basic flow of the page load. There are actually many more steps, but again, we are staying high level here. I just know someone is still going to say something like “Well.. actually… the dlls are getting executed when…” Anyway, here’s the diagram with some information I like to point out: Code Scope / Where it is executed. So, looking at the diagram we see that dlls and XSL are executed on the server and that JavaScript/jQuery are executed on the client. This is the main thing I like to point out for the following reasons: XSL (for the most part) is faster than JavaScript. I actually get this question a lot. Since XSL is executed on the server, less data is getting passed over the wire and a beefier machine (hopefully) is doing the processing. The outcome of course is better performance. When you are using jQuery and making Web Service calls you are building XML strings and sending them to the server, then ALL the results come back and the client machine has to parse through the XML and use what it needs and ignore the rest (and there is a lot of garbage that comes back from SharePoint Web Service calls). XSL and JavaScript cannot work together in the same scope. Let me clarify. JavaScript can send data back to SharePoint in postbacks that XSL can then use. XSL can output JavaScript and initialize JavaScript variables. However, XSL cannot call a JavaScript method to get a value and JavaScript cannot directly interact with XSL and call its templates. They are executed in their own scope only. No crossing of boundaries here. So, what does this all mean? Well, nothing too deep. This is just some basic fundamental information that all SharePoint devs need to understand. It will help you determine the best solution for your specific development situation and it will help the new guys understand why they get an error when trying to call a JavaScript function from within XSL. Let me know if you think quick little blogs like this are helpful or just add to the noise. I could probably put together several more that are similar. As always, thanks for stopping by, hope you learned something new.

    Read the article

  • The battle between Java vs. C#

    The battle between Java vs. C# has been a big debate amongst the development community over the last few years. Both languages have specific pros and cons based on the needs of a particular project. In general both languages utilize a similar coding syntax that is based on C++, and offer developers similar functionality. This being said, the communities supporting each of these languages are very different. The divide amongst the communities is much like the political divide in America, where the Java community would represent the Democrats and the .Net community would represent the Republicans. The Democratic Party is a proponent of the working class and the general population. Currently, Java is deeply entrenched in the open source community and is distributed freely to anyone who has an interest in using it. Open source communities rely on developers to keep it alive by constantly contributing code to make applications better; essentially they develop code by the community. This is in stark contrast to the C# community, which is typically a pay-to-play community, meaning that you must pay for code that you want to use because it is developed as products to be marketed and sold for a profit. This ties back into my reference to the Republicans because they typically represent the needs of business and personal responsibility. This is emphasized by the belief that code is a commodity and that it can be sold for a profit, which is in direct conflict with the laissez-faire beliefs of the open source community. Beyond the general differences between Java and C#, they also target two different environments. Java is developed to be environment independent and only requires that users have a Java virtual machine running in order for the Java code to execute. C#, on the other hand, typically targets any system running a Windows operating system with the appropriate version of the .Net Framework installed. However, recently there has been a push by a segment of the open source community, based around the Mono project, that lets C# code run on other non-Windows operating systems. In addition, another feature of C# is that it compiles into an intermediate language, and this is what is executed when the program runs. Because C# is reduced down to an intermediate language called the Common Intermediate Language (CIL), which is executed by the Common Language Runtime (CLR), it can be combined with other languages that are also compiled to CIL, like Visual Basic (VB) .NET and F#. The allowance and interaction between multiple languages in the .Net Framework enables projects to utilize existing code bases regardless of the actual syntax, because they can all be compiled to CIL and executed as one codebase. As a software engineer I personally feel that it is really important to learn as many languages as you can, or at least be open to learning as many languages as you can, because no one language will work in every situation. In some cases Java may be a better choice for a project, and in others it may be C#. It really depends on the requirements of a project and the time constraints. In addition, I feel that it is really important to concentrate on understanding the logic of programming and be able to translate business requirements into technical requirements. If you can understand both programming logic and business requirements then deciding which language to use is basically just choosing what syntax to write for a given business problem or need. In regards to code refactoring and dynamic languages it really does not matter.
Eventually all projects will be refactored or decommissioned to allow for progress. This is the way of life in the software development industry. The language of a project should not be chosen based on the fact that a project will eventually be refactored because they all will get refactored.
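To make the intermediate-language point above concrete, here is a small, hedged illustration (the assembly name is hypothetical): whichever .NET language produced an assembly, what gets loaded and reflected over is the same CIL and metadata.

using System;
using System.Reflection;

class CilDemo
{
    static void Main()
    {
        // Works identically whether SomeLibrary.dll was written in C#, VB.NET, or F#.
        Assembly asm = Assembly.LoadFrom("SomeLibrary.dll");
        Console.WriteLine("Targets runtime: " + asm.ImageRuntimeVersion);

        foreach (Type t in asm.GetExportedTypes())
        {
            Console.WriteLine(t.FullName);
        }
    }
}

The source language matters while you are writing the code; at run time the CLR only ever sees CIL.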

    Read the article

  • Perspective Is Everything

    - by juanlarios
    Sitting on a window seat on my way back from Seattle I looked out the window and saw the large body of water. I was reminded of childhood memories of running as hard as I could through burning hot sand with the anticipation of the splash of the ocean. Looking out the window the water appeared like a sheet draped over land. I couldn’t help but ponder how perspective changes everything.  Over the last several days I had a chance to attend the MVP Summit in Redmond. I had a great time with fellow MVP’s and the SharePoint Product Group. Although I can’t say much about what was discussed and what is coming in the future, I want to share some realizations I had while experiencing the MVP summit.  The SharePoint Product is ever-improving, full of innovation but also a reactionary embodiment of MVP, client and market feedback. There are several features that come to mind that clients complain about where I have felt helpless in informing them that the features are not as mature as they would like it. Together, we figure out a way to make it work and deal with the limitations. It became clear that there are features that have taken a different purpose in the market place from the original vision. The SP Product group is working hard to react to these changes in vision and make SharePoint better for real life implementations.  It is easy to think that SharePoint should be all things to all people. In reality there are products that are very detailed in specific composites, they do this one thing well but severely lack in other areas.  Its easy sometimes to say, “What was Microsoft thinking with this feature?” the Product group is doing all they can to make the moving pieces better and dealing with challenges with having all of them work together.  Sometimes the features don’t fully embody the vision because of the many challenges, but trust me when I say the product group is really focused on delivery and innovation.  As I was speaking with a fellow MVP throughout the session, we spoke about the iPad 2(ironically announced this past week during the MVP summit) and Microsoft’s possible product answer; I realized the days of reactionary products from MS is over. There are many users that will remember Vista and the painful execution in that product, but there has been a lot of success in Windows 7. There was no rush for a reactionary answer to the Nintendo Wii, as a result a ground breaking and game changing product was brought to market, the XBOX –Kinect! I can’t say much here, but it’s safe to say, expect innovation, and execution of products and technology that will change the market instead of react to them!       There are many things I learned and I would love to share that have to do with perspective, technology, etc… but this is far as I can go in details. This might not be new to you or specifically the message that was shared during the summit. These are just my impressions of the event and the spirit of future vision. Great things ahead!

    Read the article

  • Workarounds for supporting MVVM in the Silverlight ContextMenu service

    - by cibrax
    As I discussed in my last post, some of the Silverlight controls do not support MVVM well out of the box without specific customizations. The ContextMenu is another control that requires customization to enable data binding on the menu options. There are a few things that you might want to expose in the view model for a menu item, such as the text, the associated icon, or the command that needs to be executed. That view model would look like this:

    public class MenuItemModel
    {
        public string Name { get; set; }
        public ICommand Command { get; set; }
        public Image Icon { get; set; }
        public object CommandParameter { get; set; }
    }

    This is how you can modify the built-in control to support data binding against the model above:

    public class CustomContextMenu : ContextMenu
    {
        protected override DependencyObject GetContainerForItemOverride()
        {
            CustomMenuItem item = new CustomMenuItem();

            Binding commandBinding = new Binding("Command");
            item.SetBinding(CustomMenuItem.CommandProperty, commandBinding);

            Binding commandParameter = new Binding("CommandParameter");
            item.SetBinding(CustomMenuItem.CommandParameterProperty, commandParameter);

            return item;
        }
    }

    public class CustomMenuItem : MenuItem
    {
        protected override DependencyObject GetContainerForItemOverride()
        {
            CustomMenuItem item = new CustomMenuItem();

            Binding commandBinding = new Binding("Command");
            item.SetBinding(CustomMenuItem.CommandProperty, commandBinding);

            return item;
        }
    }

    The change is very similar to the one I made in the TreeView for manually data binding some of the menu item properties to the model. Once you have applied that change to the control, you can define it in your XAML like this:

    <toolkit:ContextMenuService.ContextMenu>
        <e:CustomContextMenu ItemsSource="{Binding MenuItems}">
            <e:CustomContextMenu.ItemTemplate>
                <DataTemplate>
                    <StackPanel Orientation="Horizontal">
                        <ContentPresenter Margin="0 0 4 0" Content="{Binding Icon}" />
                        <TextBlock Margin="0" Text="{Binding Name, Mode=OneWay}" FontSize="12"/>
                    </StackPanel>
                </DataTemplate>
            </e:CustomContextMenu.ItemTemplate>
        </e:CustomContextMenu>
    </toolkit:ContextMenuService.ContextMenu>

    The MenuItems property associated with ItemsSource in the parent model simply returns the list of supported options (menu items) for the context menu:

    this.menuItems = new MenuItemModel[]
    {
        new MenuItemModel
        {
            Name = "My Command",
            Command = new RelayCommand(OnCommandClick),
            Icon = ImageLoader.GetIcon("command.png")
        }
    };

    The only problem I have found so far with this approach is that the context menu service does not support a HierarchicalDataTemplate in case you want to have a hierarchy in the context menu (MenuItem –> sub menu items), but I guess we can live without that.
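    The snippet above references a RelayCommand and an ImageLoader helper that are not shown in the post. For readers following along, here is a minimal RelayCommand sketch; this is an assumption on my part (the author's actual helper may differ, and libraries such as MVVM Light or Prism ship equivalents), and it assumes OnCommandClick has an Action<object>-compatible signature:

    using System;
    using System.Windows.Input;

    // Minimal ICommand wrapper, sketched for illustration only.
    public class RelayCommand : ICommand
    {
        private readonly Action<object> execute;
        private readonly Func<object, bool> canExecute;

        public RelayCommand(Action<object> execute, Func<object, bool> canExecute = null)
        {
            if (execute == null) throw new ArgumentNullException("execute");
            this.execute = execute;
            this.canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged;

        // With no predicate supplied, the command is always executable.
        public bool CanExecute(object parameter)
        {
            return canExecute == null || canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            execute(parameter);
        }
    }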

    Read the article

  • Deliberate Practice

    - by Jeff Foster
    It’s easy to assume, as software engineers, that there is little need to “practice” writing code. After all, we write code all day long! Just by writing a little each day, we’re constantly learning and getting better, right? Unfortunately, that’s just not true. Of course, developers do improve with experience. Each time we encounter a problem we’re more likely to avoid it next time. If we’re in a team that deploys software early and often, we hone and improve the deployment process each time we practice it. However, not all practice makes perfect. To develop true expertise requires a particular type of practice, deliberate practice, the only goal of which is to make us better programmers. Everyday software development has other constraints and goals, not least the pressure to deliver. We rarely get the chance in the course of a “sprint” to experiment with potential solutions that are outside our current comfort zone. However, if we believe that software is a craft then it’s our duty to strive continuously to raise the standard of software development. This requires specific and sustained efforts to get better at something we currently can’t do well (from Harvard Business Review, July/August 2007). One interesting way to introduce deliberate practice, in a sustainable way, is the code kata. The term kata derives from martial arts and refers to a set of movements practiced either solo or in pairs. One of the better-known examples is the Bowling Game kata by Bob Martin, the goal of which is simply to write some code to score a game of 10-pin bowling. It sounds too easy, right? What could we possibly learn from such a simple example? Trust me, though, it’s not as simple as five minutes of typing and a solution. Of course, we can reach a solution in a short time, but the important thing about code katas is that we explore each technique fully and in a controlled way. We tackle the same problem multiple times, using different techniques and making different decisions, understanding the ramifications of each one, and exploring edge cases. The short feedback loop optimizes opportunities to learn. Another good example is Conway’s Game of Life. It’s a simple problem to solve, but try solving it in a functional style. If you’re used to mutability, solving the problem without mutating state will push you outside of your comfort zone. Similarly, if you try to solve it with the focus of “tell, don’t ask”, how will the responsibilities of each object change? As software engineers, we don’t get enough opportunities to explore new ideas. In the middle of a development cycle, we can’t suddenly start experimenting on the team’s code base. Code katas offer an opportunity to explore new techniques in a safe environment. If you’re still skeptical, my challenge to you is simply to try it out. Convince a willing colleague to pair with you and work through a kata or two. It only takes an hour, and I’m willing to bet you learn a few new things each time. The next step is to make it a sustainable team practice. Start with an hour every Friday afternoon (after all, who wants to commit code to production just before they leave for the weekend?) for a month and see how that works out. Finally, consider signing up for the Global Day of Code Retreat. It’s like a day-long code kata; it’s on December 8th, and there’s probably an event in your area!
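    To give a sense of how small a kata's starting point is, here is a minimal sketch of the Bowling Game scorer in C#. This is my own illustration rather than Bob Martin's reference solution, and it assumes a complete, valid game recorded roll by roll:

    using System.Collections.Generic;

    // Score a complete game of 10-pin bowling from the list of rolls.
    public class BowlingGame
    {
        private readonly List<int> rolls = new List<int>();

        public void Roll(int pins)
        {
            rolls.Add(pins);
        }

        public int Score()
        {
            int score = 0;
            int frameIndex = 0;
            for (int frame = 0; frame < 10; frame++)
            {
                if (rolls[frameIndex] == 10)                                // strike
                {
                    score += 10 + rolls[frameIndex + 1] + rolls[frameIndex + 2];
                    frameIndex += 1;
                }
                else if (rolls[frameIndex] + rolls[frameIndex + 1] == 10)   // spare
                {
                    score += 10 + rolls[frameIndex + 2];
                    frameIndex += 2;
                }
                else                                                        // open frame
                {
                    score += rolls[frameIndex] + rolls[frameIndex + 1];
                    frameIndex += 2;
                }
            }
            return score;
        }
    }

    The value of the kata is not this particular solution; it is repeating the exercise under different constraints (test-first, no conditionals, immutable state) and watching how the design changes each time.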

    Read the article

  • Something about Property Management or … the understanding of SharePoint Admins/roles ?!?

    - by Enrique Lima
    When I talk about SharePoint, for some reason it comes to mind as property management and all the tasks associated with it. So, imagine you have a lot (a piece of land of sorts) and you decide there is something you want to do with it. You make the choice of having a building built. Now, in order to go forward with your plan, you need to check what the rules and regulations are. Has it been zoned residential, commercial, industrial… you get the idea. This, to me, sounds like Governance: what am I to do given a defined set of rules? We keep moving forward based on those rules. And with this we start the process of building; it takes us to survey the land and identify what our boundaries are. As we go along we start getting the idea in our head as to what we will do as far as the building goes. We identify the essentials of the building, basic services and such. All in all, we plan. And as with many things we do, we like solid foundations. What a solid foundation looks like will depend on where and what we build. The way buildings are built depends in many ways on being able to foresee the potential for natural disasters or on leveraging the lay of the land. Sound familiar? We have done our Requirements Gathering. We have the building in place, we have followed the zoning rules, and we have implemented services. But we need someone to manage the building, so now we move on to the human side of the story. We want to establish a sense of normalcy in the building, someone who can be the monitoring agent for the “what’s going on?” of it. This person will be tasked with making sure all basic services are functional, that measures are taken if there is an issue, and so on. Enter the Farm Administrator. In a way, we establish an extension of the rules to make sure the building and the apartments/offices built within it follow a standard set of rules too. Now, in turn, you will have people leasing or buying the apartments/offices; they will be the keepers of that space. So, now we are building sites: we have moved from having the building (farm) ready to leasing/selling offices/apartments (site collections). Someone will assume responsibility for those offices; that person will authorize or be informed about activities, and will also decide who not only gets a code into the building but perhaps a key to the office. Enter the Site Collection Administrator. And then perhaps we move on to the person who would be responsible for specifics within the office, for example a Human Resources Manager or Coordinator; they will have specific control and knowledge about people. A facilities coordinator, and so on. I would translate those into Site Administrators. With that said, we identify the following roles and their responsibilities (but not limited to):
    Farm Administrator: Infrastructure
    Site Collection Admin: Policies for Content, Hierarchy, Recycle Bin, Security and Access
    Site Owner (Site Admin): Security and Access, Training, Guidance, Manage Templates
    All in all there are different levels of responsibility to be handled, but it is very important to understand what they are and what they mean. Here is a link to a very well laid out explanation of this: http://www.endusersharepoint.com/2009/08/11/site-managers-and-end-user-expectations-roles-and-responsibilities/

    Read the article

  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster. Filter first: PowerView is your best friend. PowerView is an often-ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name, and others. Using PowerView allows you to restrict your search results to the filters you have set, and to restrict the product list when selecting your products in Search & Browse and when creating service requests. The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking the PowerView is Off button. When PowerView is on and filters are active, clicking the button again toggles PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters. You can create a PowerView in three simple steps: 1) Turn PowerView on and select New from the PowerView menu. 2) Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: while there are many filter options, selecting your product line or your list of products will provide you with an effective filter. Click the plus sign (+) to add more filters; click the minus sign (-) to remove a filter. 3) Click Create to save and activate the filter(s). You’ll notice that “PowerView is On” displays along with the active filters. For more information about the PowerView capabilities, click the Learn more about PowerView… menu item or view a short video. Browse & Refine: access the best match fast for your product and task. In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best match document – a collection of knowledge articles and resources specific to your selections – may display, offering you a one-stop shop. The best match document, called an “information center,” is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: not all products have information centers. If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics ranging from a product overview to security information, as shown in the left menu. Just want to search? That’s easy too! Again, pick your product, pick your task, select a version if applicable, enter a keyword term, and click Search. Hint: in this example, you’ll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you’ve previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It’s that simple!
Related Information:
My Oracle Support - User Resource Center [ID 873313.1]
My Oracle Support Community
For more tips on using My Oracle Support, check out these short video training modules: My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • Identity in .NET 4.5 – Part 2: Claims Transformation in ASP.NET (Beta 1)

    - by Your DisplayName here!
    In my last post I described how every identity in .NET 4.5 is now claims-based. If you are coming from WIF you might think, great – how do I transform those claims? Sidebar: What is claims transformation? One of the most essential features of WIF (and .NET 4.5) is the ability to transform credentials (or tokens) to claims. During that process the “low level” token details are turned into claims. An example would be a Windows token – it contains things like the name of the user and which groups he belongs to. That information will be surfaced as claims of type Name and GroupSid. Forms users will be represented as a Name claim (all the other claims that WIF provided for FormsIdentity are gone in 4.5). The issue here is that your applications typically don’t care about those low-level details, but rather about things like “what’s the purchase limit of alice?”. The process of turning the low-level claims into application-specific ones is called claims transformation. In pre-claims times this would have been done by a combination of Forms Authentication extensibility, role manager, and maybe ASP.NET profile. With claims transformation all your identity-gathering code is in one place (and the outcome can be cached in a single place as opposed to multiple ones). The class that does claims transformation is called ClaimsAuthenticationManager. This class has two purposes – first, looking at the incoming (low-level) principal and making sure all required information about the user is present; this is your first chance to reject a request. And second, modeling identity information in a way that is relevant for the application (see also here). This class gets called (when present) during the pipeline when using WS-Federation – but not when using the standard .NET principals. I am not sure why – maybe because it is Beta 1. Anyhow, a number of people asked me about it, and the following is a little HTTP module that brings that feature back in 4.5.

    public class ClaimsTransformationHttpModule : IHttpModule
    {
        public void Dispose()
        { }

        public void Init(HttpApplication context)
        {
            context.PostAuthenticateRequest += Context_PostAuthenticateRequest;
        }

        void Context_PostAuthenticateRequest(object sender, EventArgs e)
        {
            var context = ((HttpApplication)sender).Context;

            // no need to call transformation if session already exists
            if (FederatedAuthentication.SessionAuthenticationModule != null &&
                FederatedAuthentication.SessionAuthenticationModule.ContainsSessionTokenCookie(context.Request.Cookies))
            {
                return;
            }

            var transformer = FederatedAuthentication.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager;
            if (transformer != null)
            {
                var transformedPrincipal = transformer.Authenticate(context.Request.RawUrl, context.User as ClaimsPrincipal);

                context.User = transformedPrincipal;
                Thread.CurrentPrincipal = transformedPrincipal;
            }
        }
    }

    HTH
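    The module above invokes whatever transformer is configured, but the post does not show a transformer itself. As a hedged illustration of what a custom ClaimsAuthenticationManager might look like (the claim type URI and the purchase-limit lookup below are placeholders of my own, not part of the post), consider the following sketch, which would then be registered in the application's identity configuration:

    using System.Security.Claims;

    // Sketch of a custom claims transformer: turn low-level claims into
    // application-specific ones.
    public class MyClaimsTransformer : ClaimsAuthenticationManager
    {
        public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
        {
            if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
            {
                // first chance to reject (or pass through) the request
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            var name = incomingPrincipal.Identity.Name;

            // application-specific claims; the claim type and lookup are hypothetical
            var claims = new[]
            {
                new Claim(ClaimTypes.Name, name),
                new Claim("http://myapp/claims/purchaselimit", LookupPurchaseLimit(name))
            };

            var identity = new ClaimsIdentity(claims, "Custom");
            return new ClaimsPrincipal(identity);
        }

        private static string LookupPurchaseLimit(string userName)
        {
            // placeholder for a real lookup (database, profile store, etc.)
            return "1000";
        }
    }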

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and network-intensive, especially when dealing with larger amounts of data. Then along came Robocopy, an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine which files had changed and replicate only those. Well, over time this idea has evolved. DFS is obviously the Windows enterprise-class service for this, along with BranchCache; however, you don't always need or want the power of DFS, especially for small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is a Microsoft tool, seemingly released in 2009, that wraps a nice GUI around setting up a paired set of folders (remember the mobile Briefcase from Windows 98?) and lets you choose the synchronization method: one-way or two-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click Run. Scheduling is even easier. Microsoft has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, -R as an argument, and the time schedule. No more complicated command lines or scripts. I find this especially useful when I use Microsoft backup to back up a system volume but only want subsets of a data share copied, and ONLY when that dataset has changed, rather than relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc. The backup is online and ready to access, with no waiting to restore a backup file and no need to stand up a domain simply to have DFS. Do note there is a risk: if you accidentally delete a file and do not catch it before the next sync, then depending on your SyncToy settings you can indeed lose that file as the destination updates, so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene. Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.
    Handling exceptions
    When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.
    Exception handler control flow
    try blocks: You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.
    catch blocks: The only way of entering a catch block is if it is run by the SEH. At the start of the block's execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this:

    .try {
        // ...
        newobj instance void [mscorlib]System.Exception::.ctor()
        throw
    MidTry:
        // ...
        leave.s RestOfMethod
    }
    catch [mscorlib]System.Exception {
        // ...
        leave.s MidTry
    }
    RestOfMethod:
        // ...

    As far as I know, this mechanism is not exposed in C# or VB.
    finally/fault blocks: The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.
    filter blocks: I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics.
    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.
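    To connect this back to C#: returning from inside a try block is legal in C# only because the compiler rewrites it for you. The following is a hedged sketch of my own (not from the post); conceptually, the compiler stores the return value in a local, emits a leave out of the protected region so the finally runs, and places the actual ret after the handler.

    using System;

    class LeaveDemo
    {
        // A "return" inside try compiles roughly to: store 42 in a local,
        // leave to a label placed after the finally handler, run the finally
        // (which ends in endfinally), then load the local and ret.
        static int ReadValue()
        {
            try
            {
                Console.WriteLine("inside try");
                return 42;
            }
            finally
            {
                Console.WriteLine("finally always runs");
            }
        }

        static void Main()
        {
            Console.WriteLine(ReadValue());
        }
    }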

    Read the article

  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through Infiniband. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology (a.k.a. Project Crossbow) can be used to control outbound traffic from Solaris Zones. Solaris Zones from both the Intranet and Extranet use an Infiniband network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance, and non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones or, even worse, to the global zones, as all the zones are reachable from this node. One solution consists of using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the Infiniband, a flow can be created for TCP traffic on port 22, thereby creating a flow for the SSH traffic. A bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris zone is controlled from outside the zone. Schema 1 describes the new network setup once the security has been put in place. Here are the instructions to create a Crossbow flow as used in Schema 1:

    (GZ)# zoneadm -z zonename halt
    ...halts the Solaris Zone.

    (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter
    ...creates a flow on the IB partition "iblink" used by the zone to connect to the Infiniband. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for the TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

    (GZ)# zoneadm -z zonename boot
    ...restarts the Solaris zone now that the flow is in place.

    Solaris Zones and Solaris Network Virtualization enable SSH access control on Infiniband (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the Infiniband switch. All the security enforcement is put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IP Filter and Role-Based Access Control, that can be used to tackle security challenges.

    Read the article

  • I am trying to figure out the best way to understand how to cache domain objects

    - by Brett Ryan
    I've always done this wrong, and I'm sure a lot of others have too: hold a reference via a map and write through to the DB, etc. I need to do this right, and I just don't know how to go about it. I know how I want my objects to be cached, but I'm not sure how to achieve it. What complicates things is that I need to do this for a legacy system where the DB can change without notice to my application. So, in the context of a web application, let's say I have a WidgetService which has several methods:

    Widget getWidget();
    Collection<Widget> getAllWidgets();
    Collection<Widget> getWidgetsByCategory(String categoryCode);
    Collection<Widget> getWidgetsByContainer(Integer parentContainer);
    Collection<Widget> getWidgetsByStatus(String status);

    Given this, I could cache by method signature, i.e. getWidgetsByCategory("AA") would have a single cache entry, or I could cache widgets individually, which I believe would be difficult; OR, a call to any method would first cache ALL widgets with a call to getAllWidgets(), and getAllWidgets() would produce cache entries that match all the keys for the other method invocations. For example, take the following untested theoretical code:

    Collection<Widget> getAllWidgets() {
        Entity entity = cache.get("ALL_WIDGETS");
        Collection<Widget> res;
        if (entity == null) {
            res = loadCache();
        } else {
            res = (Collection<Widget>) entity.getValue();
        }
        return res;
    }

    Collection<Widget> loadCache() {
        // Get widgets from underlying DB
        Collection<Widget> res = db.getAllWidgets();
        cache.put("ALL_WIDGETS", res);
        Map<String, List<Widget>> byCat = new HashMap<>();
        for (Widget widget : res) {
            // cache by different types of method calls, i.e. by category
            if (!byCat.containsKey(widget.getCategory())) {
                byCat.put(widget.getCategory(), new ArrayList<Widget>());
            }
            byCat.get(widget.getCategory()).add(widget);
        }
        cacheCategories(byCat);
        return res;
    }

    Collection<Widget> getWidgetsByCategory(String categoryCode) {
        CategoryCacheKey key = new CategoryCacheKey(categoryCode);
        Entity entity = cache.get(key);
        if (entity == null) {
            loadCache();
            entity = cache.get(key);
        }
        return entity == null ? Collections.<Widget>emptyList() : (Collection<Widget>) entity.getValue();
    }

    NOTE: I have not worked with a cache manager; the above code treats the cache as some object that holds entries as key/value pairs, and it is not modelled on any specific implementation. Using this, I have the benefit of being able to cache all objects in the different ways they will be queried while keeping only single objects on the heap, whereas if I were to cache the method invocations via, say, Spring, it would (I believe) cache multiple copies of the objects. I really wish to understand the best ways to cache domain objects before I go down the wrong path and make it harder for myself later. I have read the documentation on the Ehcache website and found various articles of interest, but nothing that gives a good, solid technique. Since I'm working with an ERP system, some DB calls are very complicated; it's not that the DB is slow, but the business representation of the domain objects makes it very clumsy. Coupled with the fact that there are actually 11 different DBs where information can be contained, which this application is consolidating into a single view, this makes caching quite important.

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem: Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits and run the dynamic bits. In addition, we need to consider data caching that reacts to database changes.
    My idea: The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine it could be unfolded into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed. In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time that propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated every second; minutes, or sometimes even longer, is sufficient. Other fragments would be dynamic and uncached. Typically, too many articles are viewed for them all to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.
    Functional notes: The folding mechanism would need to be smart enough to judge when it is more profitable to fold a dynamic cached fragment and propagate its expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of their parent fragments, then they could trigger a refolding process that would then include the updated data. A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers in order to either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.
    The Question: My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine maybe?
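    To make the fragment idea more concrete, here is a minimal sketch in C# of how such a tree might render. This is my own illustration of the design described above, not an existing framework API; the type names (Fragment, StaticFragment, CachedFragment, CompositeFragment) are invented for the example:

    using System;
    using System.Collections.Generic;
    using System.Text;

    // A page is a tree of fragments: static subtrees fold into cached text,
    // dynamic fragments are produced (or re-produced after expiry) on request.
    public abstract class Fragment
    {
        public abstract string Render(DateTime now);
    }

    // Pure XHTML scaffolding; in the full design, whole static subtrees would be
    // pre-folded into one of these and cached as text.
    public class StaticFragment : Fragment
    {
        private readonly string xhtml;
        public StaticFragment(string xhtml) { this.xhtml = xhtml; }
        public override string Render(DateTime now) { return xhtml; }
    }

    // Dynamic fragment cached with a simple expiry time.
    public class CachedFragment : Fragment
    {
        private readonly Func<string> produce;  // pulls from a model or other source
        private readonly TimeSpan ttl;
        private string cached;
        private DateTime cachedAt;

        public CachedFragment(Func<string> produce, TimeSpan ttl)
        {
            this.produce = produce;
            this.ttl = ttl;
        }

        public override string Render(DateTime now)
        {
            if (cached == null || now - cachedAt > ttl)
            {
                cached = produce();
                cachedAt = now;
            }
            return cached;
        }
    }

    // Stitches children together; a real implementation would attach resolved
    // fragments to named nodes in the parent XML rather than concatenating.
    public class CompositeFragment : Fragment
    {
        private readonly List<Fragment> children = new List<Fragment>();
        public void Add(Fragment fragment) { children.Add(fragment); }

        public override string Render(DateTime now)
        {
            var sb = new StringBuilder();
            foreach (var child in children)
            {
                sb.Append(child.Render(now));
            }
            return sb.ToString();
        }
    }

    A front page, for example, could be a CompositeFragment of StaticFragments for the scaffolding plus a CachedFragment with a TTL of a few minutes for each section, while uncached articles would simply be fragments that call their producer on every render.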

    Read the article

< Previous Page | 624 625 626 627 628 629 630 631 632 633 634 635  | Next Page >