Search Results

Search found 9026 results on 362 pages for 'vs extensibility'.


  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group's C# coding standards the other day, and there were a couple of legacy items in there that caught my eye. They had been passed down from committee to committee so many times that no one had thought to second-guess or test them in a long time. It's yet another example of how micro-optimizations can often get the best of us and cause us to write code that is less maintainable than it could be, for the sake of squeezing an extra ounce of performance out of our software. The two standards in question were these, in paraphrase:

    1. Prefer StringBuilder or string.Format() to string concatenation.
    2. Prefer string.Equals() with a case-insensitive option to string.ToUpper().Equals().

    Now, some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it's always worth repeating and trying these yourself. So let's dig in.

    The first test was a pretty standard one. When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()? Before we begin, I read in a number of iterations from the console and a length for each string to generate. Then I generate that many random strings of the given length, plus an array to hold the results. Why am I so keen to keep the results? Because I want to be able to snapshot the memory and don't want garbage collection to collect the strings, hence the array to hold on to them. I also didn't want the random strings to be part of the allocation, so I pre-allocate them and the array up front, before the snapshot. So in the code snippets below:

    - num – Number of iterations.
    - strings – Array of randomly generated strings.
    - results – Array to hold the results of the concatenation tests.
    - timer – A System.Diagnostics.Stopwatch instance to time code execution.
    - start – Beginning memory size.
    - stop – Ending memory size.
    - other – Memory size after a final GC.

    First, let's look at the concatenation loop:

        // build num strings using concatenation.
        for (int i = 0; i < num; i++)
        {
            results[i] = "This is test #" + i + " with a result of " + strings[i];
        }

    Pretty standard, right? Next, string.Format():

        // build strings using string.Format()
        for (int i = 0; i < num; i++)
        {
            results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]);
        }

    Finally, StringBuilder:

        // build strings using StringBuilder
        for (int i = 0; i < num; i++)
        {
            var builder = new StringBuilder();
            builder.Append("This is test #");
            builder.Append(i);
            builder.Append(" with a result of ");
            builder.Append(strings[i]);
            results[i] = builder.ToString();
        }

    I take each of these loops and time them using a block like this:

        // get the total amount of memory used; true tells it to run GC first.
        start = System.GC.GetTotalMemory(true);

        // restart the timer
        timer.Reset();
        timer.Start();

        // *** code to time and measure goes here. ***

        // get the current amount of memory, stop the timer, then get memory after GC.
        stop = System.GC.GetTotalMemory(false);
        timer.Stop();
        other = System.GC.GetTotalMemory(true);

    So let's look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations:

        Operator +      - Time: 547, Memory: 56104540/55595960 - 500000
        string.Format() - Time: 749, Memory: 57295812/55595960 - 500000
        StringBuilder   - Time: 608, Memory: 55312888/55595960 - 500000

    Egad! string.Format() brings up the rear and + triumphs, well, at least in terms of speed. The concatenation burns more memory than StringBuilder but less than string.Format(). This shows two main things: StringBuilder is not always the panacea many think it is, and the difference between any of the three is minuscule! The second point is extremely important. You will often hear people grasp at results and say, "look, operator + is 10% faster than StringBuilder, so always use +." Statements like this are a disservice and often misleading. For example, if I had a good guess at the size of the string, I could have pre-allocated my StringBuilder like so:

        for (int i = 0; i < num; i++)
        {
            // pre-declare StringBuilder to have a 100-char buffer.
            var builder = new StringBuilder(100);
            builder.Append("This is test #");
            builder.Append(i);
            builder.Append(" with a result of ");
            builder.Append(strings[i]);
            results[i] = builder.ToString();
        }

    Now let's look at the times:

        Operator +      - Time: 551, Memory: 56104412/55595960 - 500000
        string.Format() - Time: 753, Memory: 57296484/55595960 - 500000
        StringBuilder   - Time: 525, Memory: 59779156/55595960 - 500000

    Whoa! All of a sudden StringBuilder is back on top again! But notice, it takes more memory now. This makes perfect sense if you examine what happens behind the scenes: whenever you concatenate strings with + in your code, the compiler emits a call to String.Concat, which examines the lengths of the arguments and allocates a result buffer of the appropriate size for you. But even if we know the approximate size of our StringBuilder, look how much less readable it is! That's why I feel you should always take into account both readability and performance. After all, these timings are over 500,000 iterations. That's at most a 0.0004 ms difference per call, which is negligible. The key is to pick the best tool for the job. What do I mean? Consider these awesome words of wisdom: concatenation (+) is best at concatenating, StringBuilder is best when you need to build, and Format is best at formatting. Totally earth-shattering, right? But if you consider it carefully, there is a lot of beauty in its simplicity. Remember, there is no magic bullet. If one of these always beat the others, we'd only have one choice and not three.

    The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length. Use it when you are looping until you hit a stop condition while building up a result, and it won't steer you wrong. string.Format() seems to be the loser from the stats, but consider which of these is more readable (yes, ignore the fact that you could do this with ToString() on a DateTime):

        // build a date via concatenation
        var date1 = (month < 10 ? "0" : string.Empty) + month + '/'
            + (day < 10 ? "0" : string.Empty) + day + '/' + year;

        // build a date via StringBuilder
        var builder = new StringBuilder(10);
        if (month < 10) builder.Append('0');
        builder.Append(month);
        builder.Append('/');
        if (day < 10) builder.Append('0');
        builder.Append(day);
        builder.Append('/');
        builder.Append(year);
        var date2 = builder.ToString();

        // build a date via string.Format
        var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year);

    So the strength of string.Format() is that it makes constructing a formatted string easy to read. Yes, it's slower, but look at how much more elegant it is to do zero-padding and anything else string.Format() does.

    So my lesson is: don't look for the silver bullet! Choose the best tool. Micro-optimization almost always bites you in the end, because you're sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization. They've been stated before in many forms, but here's how I always remember them: for beginners, do not optimize; for experts, do not optimize yet. It's so true. Most of the time on today's modern hardware, a micro-second optimization at the expense of readability will net you nothing, because it won't be your bottleneck. Code for readability and choose the best tool for the job, which will usually be the most readable and maintainable choice as well. Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options... then you can start to think about optimizing.
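
    The post's second standard (string.Equals() with a case-insensitive option vs. ToUpper().Equals()) never gets a code sample in this excerpt. A minimal sketch of the two styles for comparison, with made-up sample strings:

        using System;

        class CompareDemo
        {
            static void Main()
            {
                string left = "Hello", right = "HELLO";

                // preferred: no temporary strings are allocated
                bool a = string.Equals(left, right, StringComparison.OrdinalIgnoreCase);

                // discouraged: ToUpper() allocates an upper-cased copy of each string
                bool b = left.ToUpper().Equals(right.ToUpper());

                Console.WriteLine("{0} {1}", a, b); // True True
            }
        }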

    Read the article

  • Podcasting vs Stack Overflow vs Geekswithblogs

    - by MarkPearl
    For a few years now I have been looking for effective ways to be involved in the "community". While there are a few community programming events in my area (Johannesburg), there isn't too much face-to-face stuff, which has caused me to turn to the internet. My internet attempts have been varied. At first I took the passive approach of listening to tech podcasts. This was great for a while, but soon the content became semi-repetitive and a little boring. The podcasts I was listening to all went round the same themes and speakers, and while I am still a keen listener to several tech podcasts, it didn't quench my thirst. So I began to be a bit more active, starting with Stack Overflow, where I would scan the site for questions that were in the realm of my ability to answer. It worked for a while, but soon it became discouraging: there seem to be so many people who know so much more than me and type faster that I felt fairly ineffective. So while I still use Stack Overflow when I am in a pickle and need some help, it feels more like taking from the community than giving anything back. Which brought me to Geeks with Blogs. Until I found GWB I hadn't felt like an active part of a community. I had blogged before on Blogspot and WordPress but hadn't felt connected to the community. Now, when I get a comment from someone on one of my GWB posts, either thanking me, adding a bit more, or correcting me, it makes me feel like I am contributing to a community. So well done, GWB. Thanks for making a spot that makes me feel at home!

    Read the article

  • What are the benefits and drawback of documentation vs tutorials vs video tutorials [closed]

    - by Cat
    Which types of learning resources do you find the most helpful, for which kinds of learning, and perhaps at which specific times? Some examples of types of learning you could consider: when starting to integrate a new SDK into an existing codebase; when learning a new framework without having to integrate legacy code; when digging deeper into an already-used SDK that you may not know very well yet. For example, (video) tutorials are usually very easy to follow and tell a story from beginning to end to get results, but they will nearly always assume starting from scratch or from a previous tutorial. Such a resource is therefore useful for quick learning if you don't have legacy code around, but less so if you have to search for the best fit to the code you already have. SDK documentation, on the other hand, is well structured but does not tell a story. It is more difficult to get to a specific larger result with documentation alone, but it is a better fit when you do have legacy code around and are searching for perhaps non-obvious ways of employing the SDK or library. Are there other forms of resources that you find useful, such as interactive training?

    Read the article

  • LSP vs OCP / Liskov Substitution VS Open Close

    - by Kolyunya
    I am trying to understand the SOLID principles of OOP, and I've come to the conclusion that LSP and OCP have some similarities (if not to say more). The Open/Closed Principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". LSP, in simple words, states that any instance of Foo can be replaced with any instance of Bar, which is derived from Foo, and the program will work the same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if Bar, derived from Foo, does not change anything in it but only extends it. That means that in a particular program LSP is true only when OCP is true, and OCP is true only if LSP is true - which would mean they are equivalent. Correct me if I'm wrong. I really want to understand these ideas. Great thanks for an answer.
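
    For concreteness, a minimal C# sketch of the substitution LSP describes, reusing the question's own Foo/Bar names; the Describe and Extra members are hypothetical, added only for illustration. Bar only extends Foo without altering inherited behavior, so code written against Foo keeps working when handed a Bar:

        using System;

        class Foo
        {
            public virtual string Describe() { return "I am a Foo"; }
        }

        // Bar only extends Foo; it does not weaken or change inherited behavior.
        class Bar : Foo
        {
            public string Extra() { return "something new"; }
        }

        class Program
        {
            static void PrintIt(Foo foo)
            {
                // This caller neither knows nor cares whether it got a Foo or a Bar.
                Console.WriteLine(foo.Describe());
            }

            static void Main()
            {
                PrintIt(new Foo()); // I am a Foo
                PrintIt(new Bar()); // I am a Foo  (substitution holds)
            }
        }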

    Read the article

  • Coming Soon: Development and Extensibility Handbook from Oracle Press

    - by Oliver Steinmeier
    I had hoped to get my hands on a copy at OpenWorld, but it wasn't available yet from the printers. But it's coming soon: The Oracle Fusion Applications Development and Extensibility Handbook. This book is promising to be a great resource for anyone interested in learning about our favorite topic. And while I haven't read it yet, a look at the cover page image tells me that it's going to be a high-quality book. That's because I have known one of the authors, Dhaval Mehta, for many years. He recently left Oracle development for new challenges, but until then he was widely known as one of the most knowledgeable Fusion Applications engineers. And his co-authors have equally strong and complementary backgrounds. Here's what the book covers:

    - Explore Oracle Fusion Applications components and architecture
    - Plan, develop, debug, and deploy customizations
    - Extend out-of-the-box functionality with Oracle JDeveloper
    - Modify web applications using Oracle Composer
    - Incorporate Oracle SOA Suite 11g composites
    - Validate code through sandboxes and test environments
    - Secure data using authorization, authentication, and encryption
    - Design and distribute personalized BI reports
    - Automate jobs with Oracle Enterprise Scheduler
    - Change the appearance and branding of your applications with the Oracle ADF Skin Editor

    Expect a more detailed review of the book when it hits your local bookseller's shelves (or Amazon).

    Read the article

  • Ubuntu 12.04.3 - Graphics Driver: Default vs Nvidia 319-recommended vs Nvidia 319-updated

    - by Navraj
    Background: I switched from the default driver to nvidia-319-recommended. I am guessing that this update has caused issues with keyboard shortcuts and the battery status icon disappearing, as well as power management issues, as speculated by others. Closing the laptop lid no longer suspends the laptop; it has to be done manually by clicking 'Suspend' before closing the lid. Question: How do you restore the original/default graphics driver? Thanks for your help. Regards

    Read the article

  • The NEW Oracle Enterprise Manager Extensibility Exchange

    - by Joe Diemer
    Oracle Enterprise Manager continues to expand its ecosystem with the NEW Extensibility Exchange! The Exchange offers a searchable listing of Enterprise Manager entities. Today it's stocked with plug-ins and connectors for Enterprise Manager 12c and 11g. Anyone - partners, customers, ACE community members, anyone - can post an entity, subject to approval of course. So in addition to plug-ins and connectors, the Exchange will have best practices, deployment procedures, templates, and essentially any Enterprise Manager entity that's relevant. The Exchange provides Development Resources to guide contributors in the creation of plug-ins and connectors. A Community Resources page features plug-ins validated through the Oracle Validated Integration program, as well as some other contributions important to customers. You can also discover ways to get more involved with Enterprise Manager through the user and partner communities. The Exchange was announced in the October 2nd Enterprise Manager partner press release and is being presented at Oracle OpenWorld 2012 during the following sessions:

    - "Using Oracle Enterprise Manager to Manage Your Own Private Cloud" General Session - Tuesday, Oct 2nd
    - "Managing Heterogeneous Environments with Oracle Enterprise Manager" Conference Session - Tuesday, Oct 2nd
    - "Using Management Already Built into Oracle Products: Oracle Enterprise Manager" Oracle Partner Network Exchange Session - Wednesday, Oct 3rd

    Check it out at http://www.oracle.com/goto/emextensibility, and let us know what you think by posting a comment below or clicking the "Forum" button at the Exchange itself.

    Read the article

  • Naming your website longname.com vs shortcatchy.net vs shortcatchy.info

    - by jskye
    I'm designing a website that will basically be a social network for sharing information. I have the domain $$$$d.net and the same domain $$$$d.info, where $$$$ is a word (that runs into the d) pertaining to the purpose of the site. The .com of this domain was already taken, but they've got nothing showing - only a "not reached" Google error - and they don't seem to be trying to sell it either. I also have the long name of the site, $$$$------&&&&&&&&&.com, where the words $$$$ and &&&&&&&&& would contribute relevant SEO to the site. In fact the word $$$$------ would also, if a one-letter spelling mistake is recognised at all by Google, which I doubt but am unsure about. But as a brand name the $$$$------ word still works relevantly. Which do you think is the better choice to use? The short catchy name with the .info, for relevance to information; the .net, which is more familiar than .info but slightly less relevant (though I think net as in network still works, because as I said it will be a social networking site); or the long .com domain, which has more SEO plus a pun, albeit on a spelling mistake. I know it's kind of a subjective question, and also hard to answer without knowing the name (which I've obfuscated because I'm only in the initial design stage), but nevertheless I'm interested in what some of you guys think.

    Read the article

  • H1 vs H2 vs Other for website title/logo and SEO

    - by Ilian Iliev
    It is a common practice for front-end developers to put the website title or logo in an H1 tag and the title in an H2. But most of the time the title of the page/article is more important, because it carries the content value. So my question is: what is the best approach from a semantic and SEO viewpoint? Examples: logo - H1, title - H1; logo - H1, title - H2; logo - H2, title - H1; logo - other tag, title - H1. Provide other variants if you think they will have a bigger effect.

    Read the article

  • CVS vs SVN vs GIT

    - by user3215
    CVS is being used in my workplace, and I don't have much knowledge of CVS other than installing it and creating CVS users, though I've heard developers share their projects with Eclipse or something like that. I've been asked to check for the best repositories offering advanced features, with SVN and Git given as hints. If anyone is using these repositories, please shortlist their features, if possible with links to good installation guides and a bit of information on what Eclipse has to do with these repositories. Thank you!

    Read the article

  • Nails vs Screws (C# List vs Dictionary)

    - by MarkPearl
    General: This may sound like a typical noob statement, but I'm finding out in a very real way that just because you have a solution to a problem doesn't necessarily mean it is the best solution. This was reiterated to me when a friend of mine suggested I look at using dictionaries instead of lists for a particular problem. He was right; I had always just assumed that because lists solved my problem I did not need to look elsewhere. So my new manifesto to counter this ageless problem is as follows: look for a solution that will logically work; once you have a solution, look for possible alternatives; decide why your current solution is the best approach compared to the alternatives; if it is, use it till something better comes along, and if it isn't, change.

    What's the difference between Lists and Dictionaries? Both lists and dictionaries are used to store collections of data. Assume we had the following declarations:

        var dic = new Dictionary<string, long>();
        var lst = new List<long>();
        long data;

    With a list, you simply add the item and it will be appended to the end of the list:

        lst.Add(data);

    With a dictionary, you need to specify some sort of key along with the data you want to add, so that the item can be uniquely identified:

        dic.Add(uniquekey, data);

    Because with a dictionary you have a unique identifier, all sorts of optimized algorithms are provided in the background to find your associated data. What this means is that if you want to access your data, it is a lot faster than with a list. So when is it appropriate to use either class? For me, if I can guarantee that each item in my collection will have a unique identifier, then I will use dictionaries instead of lists, as there is a considerable performance benefit when accessing each data item. If I cannot make that sort of guarantee, then by default I will use a list. I know this is all really basic, and I hope I haven't missed some fundamental principle. If anyone would like to add their 2 cents, please feel free to do so.
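
    To make the claimed lookup difference tangible, here is a minimal, illustrative benchmark sketch; the collection size and key format are made up, and real numbers will vary by machine and run. List<T> has to scan until it finds a match (O(n)), while Dictionary<TKey, TValue> hashes the key straight to its bucket (O(1) on average):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class LookupDemo
        {
            static void Main()
            {
                const int count = 100000;
                var lst = new List<string>();
                var dic = new Dictionary<string, long>();

                for (int i = 0; i < count; i++)
                {
                    lst.Add("key" + i);
                    dic.Add("key" + i, i);
                }

                var timer = Stopwatch.StartNew();
                bool inList = lst.Contains("key" + (count - 1)); // linear scan
                timer.Stop();
                Console.WriteLine("List:       {0} ticks (found={1})", timer.ElapsedTicks, inList);

                timer.Restart();
                bool inDic = dic.ContainsKey("key" + (count - 1)); // hash lookup
                timer.Stop();
                Console.WriteLine("Dictionary: {0} ticks (found={1})", timer.ElapsedTicks, inDic);
            }
        }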

    Read the article

  • Next steps for Asp.Net C# developer (RoR vs Python Django vs PhP Drupal)

    - by ProfessorB
    The majority of my web development experience has been on the .NET stack (mainly ASP.NET C#). I am looking to learn something new in my spare time, for use in personal projects and possibly professionally (as an ISV). I know some Python and have done some scripting with it in the past, though nothing on the web. PHP has been around for a long time, and RoR has gained a lot of popularity. Are there any developers from the .NET world who have migrated over to one or more of the other platforms? If so, which do you prefer and why? Which would you suggest and why?

    Read the article

  • contractor vs full time employee

    - by Victor
    What are the long-term career prospects of a contractor/consultant in the IT field vs a full-time salaried employee? The usual arguments aside - stability vs more upfront money, paid leave vs tax savings, less paperwork vs more freedom, stagnation vs changing job environments, etc. - can someone with long career experience on, hopefully, both sides of the divide comment on the pros and cons of contracting vs being an employee? This will be beneficial for all if only people with ample experience choose to answer. Comments are always welcome though.

    Read the article

  • Entity communication: Message queue vs Publish/Subscribe vs Signal/Slots

    - by deft_code
    How do game engine entities communicate? Two use cases: how would entity_A send a take-damage message to entity_B, and how would entity_A query entity_B's HP? Here's what I've encountered so far:

    Message queue: entity_A creates a take-damage message and posts it to entity_B's message queue. entity_A creates a query-hp message and posts it to entity_B; entity_B in return creates a response-hp message and posts it to entity_A.

    Publish/Subscribe: entity_B subscribes to take-damage messages (possibly with some preemptive filtering so only relevant messages are delivered). entity_A produces a take-damage message that references entity_B. entity_A subscribes to update-hp messages (possibly filtered). Every frame, entity_B broadcasts update-hp messages.

    Signal/Slots: ??? entity_A connects an update-hp slot to entity_B's update-hp signal.

    Something better? Do I have a correct understanding of how these communication schemes would tie into a game engine's entity system? How do entities in commercial game engines communicate?
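
    The "???" under Signal/Slots invites a concrete sketch. Here is a minimal C# version of that wiring, assuming plain .NET events stand in for signals; the Entity class and its member names are hypothetical, not from any particular engine:

        using System;

        // Hypothetical entity with an "update-hp" signal, modeled as a .NET event.
        class Entity
        {
            public event Action<int> HpChanged;   // the "signal"

            private int hp = 100;
            public int Hp
            {
                get { return hp; }
                set { hp = value; if (HpChanged != null) HpChanged(hp); }
            }

            public void TakeDamage(int amount) { Hp -= amount; }
        }

        class Program
        {
            static void Main()
            {
                var entityB = new Entity();

                // entity_A connects an update-hp "slot" to entity_B's signal.
                entityB.HpChanged += newHp => Console.WriteLine("A sees B's HP: " + newHp);

                entityB.TakeDamage(30); // prints: A sees B's HP: 70
            }
        }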

    Read the article

  • Alt text vs CSS sprites (SEO vs speed)

    - by leeoniya
    I'm reworking our site to reduce HTTP requests and blocking requests by concatenating JS and CSS, gzipping, loading all JS via LABjs, and using CSS sprites for images that were previously loaded individually via <img> tags. Progress has been great so far: a 5x page-load performance improvement. However, we're in the top 5 organic search rankings in Google for many targeted keywords and phrases, and I'm afraid eliminating so many img tags with alt attributes could hurt our SEO. Does anyone have any experience with alt attribute manipulation/removal and its effect on SEO positions? Is a previous rank "sticky"?

    Read the article

  • ASP.NET Web Forms Extensibility: Control Adapters

    - by Ricardo Peres
    All ASP.NET controls from version 2.0 onwards can be associated with a control adapter. A control adapter is a class that inherits from ControlAdapter, and it has the chance to interact with the control(s) it is targeting so as to change some of its properties or alter its output. I have talked about control adapters before, and they really are a cool feature. The ControlAdapter class exposes virtual methods for some well-known lifecycle events - OnInit, OnLoad, OnPreRender and OnUnload - that closely match their Control counterparts, but are fired before them. Because the control adapter has a reference to its target Control, it can cast it to its concrete class and do something with it before its lifecycle events are actually fired. The adapter is also notified before the control is rendered (BeginRender), after its children are rendered (RenderChildren) and after the control itself is rendered (Render): this way the adapter can modify the control's output.

    Control adapters may be specified for any class inheriting from Control, including abstract classes, web server controls and even pages. You can, for example, specify a control adapter for the WebControl and UserControl classes, but, curiously, not for Control itself. When specifying a control adapter for a page, it must inherit from PageAdapter instead of ControlAdapter. The adapter for a control, if specified, can be found on the protected Adapter property, and for a page, on the PageAdapter property.

    The first use of control adapters that came to my attention was for changing the output of standard ASP.NET web controls so that they were more based on CSS and less on HTML tables: it was the CSS Friendly Control Adapters project, now available at http://code.google.com/p/aspnetcontroladapters/. They are interesting because you specify them in one location and they apply anywhere a control of the target type is created. Mind you, this applies to controls declared in markup as well as controls created in code with the new operator.

    So, how do you use control adapters? The most usual way is through a browser definition file, in which you specify a set of control adapters and their target controls for a given browser. This browser definition file is an XML file with the extension .browser, and it can either be global (%WINDIR%\Microsoft.NET\Framework64\vXXXX\Config\Browsers) or local to the web application, in which case it must be placed inside the App_Browsers folder at the root of the web site. It looks like this:

        <browsers>
            <browser refID="Default">
                <controlAdapters>
                    <adapter controlType="System.Web.UI.WebControls.TextBox" adapterType="MyNamespace.TextBoxAdapter, MyAssembly" />
                </controlAdapters>
            </browser>
        </browsers>

    A browser definition file targets a specific browser, so you can have different definitions for Chrome, IE, Firefox and Opera, as well as for specific versions of each of those (like IE8 or Firefox3). Alternatively, if you set the target to Default, it will apply to all. The reason to pick a specific browser and version might be, for example, to circumvent some limitation present in that specific version, so that in markup you don't need to be concerned with it. Another option is through the current Browser object of the request:

        this.Context.Request.Browser.Adapters.Add(typeof(TextBox).FullName, typeof(TextBoxAdapter).FullName);

    This must happen very early in the page lifecycle, for example in the OnPreInit event, or even in Application_Start. You have to specify the full class name for both the target control and the adapter. Of course, you have to do this for every request, because it won't be persisted.

    As an example, you may know that the classic TextBox control renders an HTML input tag if its TextMode is set to SingleLine and a textarea if set to MultiLine. Because the textarea has no notion of maximum length, unlike the input, something must be done in order to enforce this. Here's a simple suggestion:

        // assumes: using System; using System.Web.UI.WebControls; using System.Web.UI.Adapters;
        public class TextBoxControlAdapter : ControlAdapter
        {
            protected TextBox Target
            {
                get { return (this.Control as TextBox); }
            }

            protected override void OnLoad(EventArgs e)
            {
                if ((this.Target.MaxLength > 0) && (this.Target.TextMode == TextBoxMode.MultiLine))
                {
                    if (this.Target.Page.ClientScript.IsClientScriptBlockRegistered(this.Target.Page.GetType(), "TextBox_KeyUp") == false)
                    {
                        String script = String.Concat("function TextBox_KeyUp(sender) { if (sender.value.length > ", this.Target.MaxLength, ") { sender.value = sender.value.substr(0, ", this.Target.MaxLength, "); } }\n");

                        this.Target.Page.ClientScript.RegisterClientScriptBlock(this.Target.Page.GetType(), "TextBox_KeyUp", script, true);
                    }

                    this.Target.Attributes["onkeyup"] = "TextBox_KeyUp(this)";
                }

                base.OnLoad(e);
            }
        }

    What it does is: for every TextBox control that is set to multi-line and has a defined maximum length, it injects some JavaScript that will filter out any content exceeding that maximum length. This will occur for any TextBox on your site, or any class that inherits from it. You can use either of the previous options to register this adapter. Stay tuned for more ASP.NET Web Forms extensibility tips!
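
    The post notes that a page's adapter must derive from PageAdapter but shows only a control adapter. A minimal hedged sketch of a page adapter, with a hypothetical class name; it would presumably be registered the same way, e.g. with controlType="System.Web.UI.Page" in the .browser file:

        using System;
        using System.Web.UI.Adapters;

        // Hypothetical page adapter: its OnLoad runs before the page's own OnLoad.
        public class TracingPageAdapter : PageAdapter
        {
            protected override void OnLoad(EventArgs e)
            {
                // The target page is available through the inherited Page property.
                this.Page.Trace.Write("Page adapter saw OnLoad for " + this.Page.AppRelativeVirtualPath);

                base.OnLoad(e);
            }
        }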

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience

    Oracle has made a huge investment in the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end-user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle's enterprise software great-looking and usable doesn't stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that's why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs.

    Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There's a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle's investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you're an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience and provide a roadmap to where the Oracle user experience is going. These workshops require non-disclosure agreements, and they have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here's a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.

    For Partners: George Papazzian, Principal, Naviscent, with Joyce Ohgi, Oracle

    Oracle Fusion Applications HCM Pre-Sales Seminar: In concert with Worldwide Alliances and Channels, under Applications Partner Enablement Director Jonathan Vinoskey's guidance, the Applications User Experience team delivers a two-day workshop. Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and day two focuses on positioning and leveraging Oracle's investment in the Oracle Fusion Applications user experience. The next workshops will occur on the following dates:

    - December 4-5, 2013 @ Manchester, UK
    - January 29-30, 2014 @ Reston, Virginia
    - February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman)
    - March 11-12, 2014 @ Dubai, United Arab Emirates
    - April 1-2, 2014 @ Chicago, Illinois

    Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, and futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers, and it requires legal documentation. We will be talking about the Oracle applications UX strategy and roadmap.

    Partner Implementation Training on User Interface - How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle's Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle's experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs.

    - Nov 5-6, 2013 @ Redwood Shores, CA, USA
    - January 28-29, 2014 @ Reston, Virginia, USA
    - February 25-26, 2014 @ Guadalajara, Mexico
    - March 9-10, 2014 @ Dubai, United Arab Emirates

    To register, contact [email protected]

    Simplified UI Customization & Extensibility (pilot workshop): We will be reviewing the proposed content for communicating the user experience toolkit available with the next release of Oracle Fusion Applications. Our core focus will be on which toolkit components our system implementers and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that will need to leverage the simplified UI.

    - Dec 11, 2013 @ Reading, UK

    For information: contact [email protected]

    Private lab tour and demos: Interested in seeing what's going on in the Apps UX labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters.

    OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Customers: Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle

    Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you'll see.

    Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences.

    Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well they will work.

    Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs.

    UX Direct: As above, use the same methods that Oracle uses to develop its own user experiences.

    For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions; Richard Bingham, Oracle; Balaji Kamepalli, EiSTechnologies; Praveen Pillalamarri, EiSTechnologies

    How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.

    UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flows you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies.

    UX Direct: As above, use the same methods that Oracle uses to develop its own user experiences.

    For Oracle Sales: Mike Klein, Jeremy Ashley, Brent White, Oracle

    Contact your local salesperson for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral, ranging from the new simplified UI in Oracle Fusion Applications Release 7 to E-Business Suite, Siebel, PeopleSoft, and JD Edwards user experience highlights. Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • Fedora vs Ubuntu vs Debian to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs Debian vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora or Debian. I have been able to get Subversion set up and hosted via Apache over HTTPS, and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla: I have run into some issues getting all the Perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora or Debian? Can I just install a package instead of having to download the tar.gz file, untar it, run Perl scripts, etc.? 2) Is Fedora or Debian considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2 and act as a file and print server for my home network.

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's blog post covers some of the nice improvements coming to JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express. You'll find with VS 2010 that JavaScript intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Improved JavaScript Intellisense: Providing intellisense for a dynamic language like JavaScript is more involved than doing so for a statically typed language like VB or C#. Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself, since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime. VS 2010's JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type - which is how its intellisense completion is kept accurate and complete. Below is a simple walkthrough that shows off how rich and flexible it is in the final release.

    Scenario 1 - Basic Type Inference: When you declare a variable in JavaScript you do not have to declare its type. Instead, the type of the variable is based on the value assigned to it. Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable and provide the appropriate code intellisense based on the value assigned to it. For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the "foo" variable). If we later assign a numeric value to "foo", the statement completion (after this assignment) automatically changes to provide intellisense for a number.

    Scenario 2 - Intellisense When Manipulating Browser Objects: It is pretty common with JavaScript to manipulate the DOM of a page, as well as to work against browser objects available on the client. Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects, but they didn't provide much help with more advanced scenarios (like creating dynamic variables and methods). VS 2010's pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios. For example, below we are using the browser's window object to create a global variable named "bar". Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try to use it. When we assign the "bar" variable as a number (instead of as a string), the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead.

    Scenario 3 - Showing Off: Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it, and it is still able to provide accurate type inference and intellisense. For example, below we are using a for-loop and the browser's window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9). Notice how the editor's intellisense engine identifies and provides statement completion for them. Because variables added via the browser's window object are also global variables, they now show up in the global variable intellisense drop-down as well. Better yet, type inference is still fully supported. So if we assign a string to a dynamically named variable we will get type inference for a string; if we assign a number we'll get type inference for a number. Just for fun (and to show off!) we could adjust our for-loop to assign a string for even-numbered variables (bar2, bar4, bar6, etc.) and assign a number for odd-numbered variables (bar1, bar3, bar5, etc.). Notice above how we get statement completion for a string for the "bar2" variable, and below how for "bar1" we get statement completion for a number.

    This isn't just a cool pet trick: While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries. Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible. VS 2010's support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code intellisense out of the box when programming against them.

    Summary: Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provide much richer JavaScript intellisense support. This support works with pretty much all popular JavaScript libraries. It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications. Hope this helps, Scott

    P.S. You can read my previous blog post on VS 2008's JavaScript intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported). VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article

  • Server Vs Service / Physical Vs Virtual

    - by user559142
    When reading definitions of a server service (e.g. IIS), you will often find several cross-references to a virtual server, but none seem to definitively say whether the two are the same. Can somebody help me understand the differences? I cannot get my head around them. Ideally I would like to know the differences between the following, or indeed whether any of them refer to the same thing: 1) Logical server; 2) Virtual host; 3) Logical partition; 4) Physical server vs virtual server; 5) Server service vs virtual host.

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach I call “Informed TDD” read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is abot detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that´s also an example of the SRP. Function design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer”, on the other hand, signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.
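In code the PoMO can look as simple as this. A minimal sketch of my own, reusing the de-duplication example from above; the integrating Flow() function is a hypothetical name:

using System.Linq;

// Neither step knows the other; each only declares what it consumes and produces.
static string[] Parse(string input) { return input.Split(','); }
static string[] Deduplicate(string[] items) { return items.Distinct().ToArray(); }

// "Control" is a concept of the whole: only an integrating function wires the parts up.
static string[] Flow(string input) { return Deduplicate(Parse(input)); }

Each part can be tested, replaced, or re-wired without touching the others.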
By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious. Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology. Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

- Translate an input port to a function.
- Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: There is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none, one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord) {
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

class Filter {
    public void filter_stop_words(
        string word) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.
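To show how such a unit gets wired up, here is a quick usage sketch of my own; check_spelling stands in for a hypothetical downstream processing step:

// Continuation style: the downstream step is passed in upon call.
filter_stop_words("beautiful", word => check_spelling(word));

// Event style: the downstream step subscribes before data flows.
var filter = new Filter();
filter.onNoStopWord += word => check_spelling(word);
filter.filter_stop_words("beautiful");

Either way the filter stays oblivious of what happens downstream.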
Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length, it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle “average words” (words not longer than a line). Here's the Flow Design for this increment: The first three bullet points turned into functional units, with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside. That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls (see the sketch at the end of this section). Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just “phase in” the remaining processing step: Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation, checking the assumption that this works, does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
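The code screenshots from the original post are not reproduced in this text, so here is a minimal sketch of my own of what the wired-up flow for increment 2 might look like. Extract words and Reformat are step names from the text; Split_long_words, Assemble_text, and the greedy line assembly are my assumptions:

using System.Collections.Generic;

static string WordWrap(string text, int maxLineLength)
{
    var words = Extract_words(text);
    words = Split_long_words(words, maxLineLength); // increment 2: "phased in" between the existing steps
    var lines = Reformat(words, maxLineLength);
    return Assemble_text(lines);
}

static string[] Extract_words(string text)
{
    return text.Split(' ');
}

// Trivial first implementation of increment 2: the input is just passed on.
static string[] Split_long_words(string[] words, int maxLineLength)
{
    return words;
}

// Greedily assemble words into lines no longer than maxLineLength.
static string[] Reformat(string[] words, int maxLineLength)
{
    var lines = new List<string>();
    var line = "";
    foreach (var word in words)
    {
        if (line.Length == 0)
            line = word;
        else if (line.Length + 1 + word.Length <= maxLineLength)
            line += " " + word;
        else
        {
            lines.Add(line);
            line = word;
        }
    }
    if (line.Length > 0)
        lines.Add(line);
    return lines.ToArray();
}

static string Assemble_text(string[] lines)
{
    return string.Join("\n", lines);
}

Note how WordWrap() itself stays a plain sequence of function calls; the increment only inserted one more step into the flow.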
What's the easier, more straightforward way to clean code? And keep in mind: You ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here. For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook “The Incremental Architect's Napkin”. It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no-brainer.

[4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • A DirectoryCatalog class for Silverlight MEF (Managed Extensibility Framework)

    - by Dixin
In the MEF (Managed Extensibility Framework) for .NET, there are useful ComposablePartCatalog implementations in System.ComponentModel.Composition.dll, like:

- System.ComponentModel.Composition.Hosting.AggregateCatalog
- System.ComponentModel.Composition.Hosting.AssemblyCatalog
- System.ComponentModel.Composition.Hosting.DirectoryCatalog
- System.ComponentModel.Composition.Hosting.TypeCatalog

In Silverlight, there is an extra System.ComponentModel.Composition.Hosting.DeploymentCatalog. As a wrapper of AssemblyCatalog, it can load all assemblies in a XAP file on the web server side. Unfortunately, in Silverlight there is no DirectoryCatalog to load a folder.

Background

There are scenarios where a Silverlight application may need to load all XAP files in a folder on the web server, for example:

- If the Silverlight application is extensible and supports plug-ins, there would be a /ClientBin/Plugins/ folder on the web server, and each plugin would be an individual XAP file in the folder. In this scenario, after the application is loaded and started up, it would like to load all XAP files in /ClientBin/Plugins/.
- If the application supports themes, there would be a /ClientBin/Themes/ folder, and each theme would be an individual XAP file too. The application would also need to load all XAP files in /ClientBin/Themes/.

It is useful if we have a DirectoryCatalog:

DirectoryCatalog catalog = new DirectoryCatalog("/Plugins");
catalog.DownloadCompleted += (sender, e) => { };
catalog.DownloadAsync();

Obviously, the implementation of DirectoryCatalog is easy. It is just a collection of DeploymentCatalog instances.

Retrieve file list from a directory

Of course, to retrieve the file list from a web folder, the folder's “Directory Browsing” feature must be enabled, so that when the folder is requested, it responds with a list of its files and folders. This is nothing but a simple HTML page:

<html>
<head>
<title>localhost - /Folder/</title>
</head>
<body>
<h1>localhost - /Folder/</h1>
<hr>
<pre>
<a href="/">[To Parent Directory]</a><br>
<br>
1/3/2011 7:22 PM 185 <a href="/Folder/File.txt">File.txt</a><br>
1/3/2011 7:22 PM &lt;dir&gt; <a href="/Folder/Folder/">Folder</a><br>
</pre>
<hr>
</body>
</html>

For the ASP.NET Development Server of Visual Studio, directory browsing is enabled by default. The HTML <body> is almost the same:

<body bgcolor="white">
<h2><i>Directory Listing -- /ClientBin/</i></h2>
<hr width="100%" size="1" color="silver">
<pre>
<a href="/">[To Parent Directory]</a>
Thursday, January 27, 2011 11:51 PM 282,538 <a href="Test.xap">Test.xap</a>
Tuesday, January 04, 2011 02:06 AM &lt;dir&gt; <a href="TestFolder/">TestFolder</a>
</pre>
<hr width="100%" size="1" color="silver">
<b>Version Information:</b>&nbsp;ASP.NET Development Server 10.0.0.0
</body>

The only difference is, IIS's links start with a slash, but here the links do not. Here one way to get the file list is to read the href attributes of the links:

[Pure]
private IEnumerable<Uri> GetFilesFromDirectory(string html)
{
    Contract.Requires(html != null);
    Contract.Ensures(Contract.Result<IEnumerable<Uri>>() != null);

    return new Regex(
            "<a href=\"(?<uriRelative>[^\"]*)\">[^<]*</a>",
            RegexOptions.IgnoreCase | RegexOptions.CultureInvariant)
        .Matches(html)
        .OfType<Match>()
        .Where(match => match.Success)
        .Select(match => match.Groups["uriRelative"].Value)
        .Where(uriRelative => uriRelative.EndsWith(".xap", StringComparison.Ordinal))
        .Select(uriRelative =>
        {
            Uri baseUri = this.Uri.IsAbsoluteUri ?
                this.Uri : new Uri(Application.Current.Host.Source, this.Uri);
            uriRelative = uriRelative.StartsWith("/", StringComparison.Ordinal) ?
                uriRelative :
                (baseUri.LocalPath.EndsWith("/", StringComparison.Ordinal) ?
                    baseUri.LocalPath + uriRelative :
                    baseUri.LocalPath + "/" + uriRelative);
            return new Uri(baseUri, uriRelative);
        });
}

Please notice the folders' links end with a slash. They are filtered out by the second Where() query. The above method can find files' URIs in the specified IIS folder, or an ASP.NET Development Server folder while debugging. To support other formats of file lists, a constructor is needed that accepts a customized method:

/// <summary>
/// Initializes a new instance of the <see cref="T:System.ComponentModel.Composition.Hosting.DirectoryCatalog" /> class with <see cref="T:System.ComponentModel.Composition.Primitives.ComposablePartDefinition" /> objects based on all the XAP files in the specified directory URI.
/// </summary>
/// <param name="uri">
/// URI to the directory to scan for XAPs to add to the catalog.
/// The URI must be absolute, or relative to <see cref="P:System.Windows.Interop.SilverlightHost.Source" />.
/// </param>
/// <param name="getFilesFromDirectory">
/// The method to find files' URIs in the specified directory.
/// </param>
public DirectoryCatalog(Uri uri, Func<string, IEnumerable<Uri>> getFilesFromDirectory)
{
    Contract.Requires(uri != null);

    this._uri = uri;
    this._getFilesFromDirectory = getFilesFromDirectory ?? this.GetFilesFromDirectory;
    this._webClient = new Lazy<WebClient>(() => new WebClient());

    // Initializes other members.
}

When the getFilesFromDirectory parameter is null, the above GetFilesFromDirectory() method will be used as the default.

Download the directory's XAP file list

Now a public method can be created to start the downloading:

/// <summary>
/// Begins downloading the XAP files in the directory.
/// </summary>
public void DownloadAsync()
{
    this.ThrowIfDisposed();
    if (Interlocked.CompareExchange(ref this._state, State.DownloadStarted, State.Created) == 0)
    {
        this._webClient.Value.OpenReadCompleted += this.HandleOpenReadCompleted;
        this._webClient.Value.OpenReadAsync(this.Uri, this);
    }
    else
    {
        this.MutateStateOrThrow(State.DownloadCompleted, State.Initialized);
        this.OnDownloadCompleted(new AsyncCompletedEventArgs(null, false, this));
    }
}

Here the HandleOpenReadCompleted() method is invoked when the file list HTML is downloaded.

Download all XAP files

After retrieving all files' URIs, the next thing becomes even easier.
HandleOpenReadCompleted() just uses the built-in DeploymentCatalog to download the XAPs, and aggregates them into one AggregateCatalog:

private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    Exception error = e.Error;
    bool cancelled = e.Cancelled;
    if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted)
    {
        cancelled = true;
    }
    if (error == null && !cancelled)
    {
        try
        {
            using (StreamReader reader = new StreamReader(e.Result))
            {
                string html = reader.ReadToEnd();
                IEnumerable<Uri> uris = this._getFilesFromDirectory(html);
                Contract.Assume(uris != null);
                IEnumerable<DeploymentCatalog> deploymentCatalogs =
                    uris.Select(uri => new DeploymentCatalog(uri));
                deploymentCatalogs.ForEach(
                    deploymentCatalog =>
                    {
                        this._aggregateCatalog.Catalogs.Add(deploymentCatalog);
                        deploymentCatalog.DownloadCompleted += this.HandleDownloadCompleted;
                    });
                deploymentCatalogs.ForEach(deploymentCatalog => deploymentCatalog.DownloadAsync());
            }
        }
        catch (Exception exception)
        {
            error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception);
        }
    }
    // Exception handling.
}

In HandleDownloadCompleted(), if all XAPs are downloaded without exception, the OnDownloadCompleted() callback method will be invoked:

private void HandleDownloadCompleted(object sender, AsyncCompletedEventArgs e)
{
    if (Interlocked.Increment(ref this._downloaded) == this._aggregateCatalog.Catalogs.Count)
    {
        this.OnDownloadCompleted(e);
    }
}

Exception handling

This DirectoryCatalog can work only if the directory browsing feature is enabled. It is important to inform the caller when the directory cannot be browsed for XAP downloading:

private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    Exception error = e.Error;
    bool cancelled = e.Cancelled;
    if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted)
    {
        cancelled = true;
    }
    if (error == null && !cancelled)
    {
        try
        {
            // No exception thrown when browsing the directory. Downloads the listed XAPs.
        }
        catch (Exception exception)
        {
            error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception);
        }
    }
    WebException webException = error as WebException;
    if (webException != null)
    {
        HttpWebResponse webResponse = webException.Response as HttpWebResponse;
        if (webResponse != null)
        {
            // Internally, WebClient uses WebRequest.Create() to create the WebRequest object. Here does the same thing.
            WebRequest request = WebRequest.Create(Application.Current.Host.Source);
            Contract.Assume(request != null);
            if (request.CreatorInstance == WebRequestCreator.ClientHttp &&
                // Silverlight is in client HTTP handling; all HTTP status codes are supported.
                webResponse.StatusCode == HttpStatusCode.Forbidden)
            {
                // When directory browsing is disabled, the HTTP status code is 403 (forbidden).
                error = new InvalidOperationException(
                    Resources.InvalidOperationException_ErrorListingDirectory_ClientHttp,
                    webException);
            }
            else if (request.CreatorInstance == WebRequestCreator.BrowserHttp &&
                // Silverlight is in browser HTTP handling; only 200 and 404 are supported.
                webResponse.StatusCode == HttpStatusCode.NotFound)
            {
                // When directory browsing is disabled, the HTTP status code is 404 (not found).
                error = new InvalidOperationException(
                    Resources.InvalidOperationException_ErrorListingDirectory_BrowserHttp,
                    webException);
            }
        }
    }
    this.OnDownloadCompleted(new AsyncCompletedEventArgs(error, cancelled, this));
}

Please notice that a Silverlight 3+ application can work either in client HTTP handling or in browser HTTP handling. One difference is:

- In browser HTTP handling, only HTTP status codes 200 (OK) and 404 are supported; any other status (500, 403, etc.) surfaces as 404.
- In client HTTP handling, all HTTP status codes are supported.

So in the above code, exceptions in the two modes are handled differently.

Conclusion

Here is what the whole DirectoryCatalog looks like. Please click here to download the source code; a simple unit test is included. This is a rough implementation. And, for convenience, some design and coding simply follow the built-in AggregateCatalog and DeploymentCatalog classes. Please feel free to modify the code, and please kindly tell me if any issue is found.
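Putting it all together, here is a usage sketch of my own based on the snippets above. It assumes DirectoryCatalog ultimately behaves as a ComposablePartCatalog wrapping the internal AggregateCatalog, mirroring the built-in catalogs; passing null selects the default directory-listing parser:

DirectoryCatalog catalog = new DirectoryCatalog(
    new Uri("/ClientBin/Plugins/", UriKind.Relative), null);
CompositionContainer container = new CompositionContainer(catalog);
catalog.DownloadCompleted += (sender, e) =>
{
    if (e.Error == null && !e.Cancelled)
    {
        // All plug-in XAPs are downloaded; their exports are now available for composition.
    }
};
catalog.DownloadAsync();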

    Read the article

  • ASP.NET Web Forms Extensibility: Handler Factories

    - by Ricardo Peres
A handler factory is a class that implements IHttpHandlerFactory and is responsible for instantiating a handler (IHttpHandler) that will process the current request. This is true for all kinds of web requests, whether they are for ASPX pages, ASMX/SVC web services, ASHX/AXD handlers, or any other kind of file. Handler factories are also used to restrict access to certain file types, such as Config, Csproj, etc. Handler factories are registered in the global Web.config file, normally located at %WINDIR%\Microsoft.NET\Framework<x64>\vXXXX\Config, for a given path and request type (GET, POST, HEAD, etc.). This goes in the <httpHandlers> section. You would create a custom handler factory for a number of reasons; let me list just two:

- A centralized place for using dependency injection;
- A centralized place for invoking custom methods or performing some kind of validation on all pages.

Let's see an example using Unity for injecting dependencies into a page. Suppose we have this in Global.asax.cs:

public class Global : HttpApplication
{
    internal static readonly IUnityContainer Unity = new UnityContainer();

    void Application_Start(Object sender, EventArgs e)
    {
        Unity.RegisterType<IFunctionality, ConcreteFunctionality>();
    }
}

We instantiate Unity and register a concrete implementation for an interface; this could/should probably go in the Web.config file. Forget about its actual definition, it's not important. Then, we create a custom handler factory:

public class UnityPageHandlerFactory : PageHandlerFactory
{
    public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path)
    {
        IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path);

        //one scenario: inject dependencies
        Global.Unity.BuildUp(handler.GetType(), handler, String.Empty);

        return (handler);
    }
}

It inherits from PageHandlerFactory, which is .NET's included factory for building regular ASPX pages. We override the GetHandler method and issue a call to the BuildUp method, which will inject required dependencies, if any exist. An example page with dependencies might be:

public class SomePage : Page
{
    [Dependency]
    public IFunctionality Functionality
    {
        get;
        set;
    }
}

Notice the DependencyAttribute; it is used by Unity to identify properties that require dependency injection. When BuildUp is called, the Functionality property (or any other properties with the DependencyAttribute attribute) will receive the concrete implementation associated with its type, as registered in Unity. Another example: checking a page for authorization.
Let’s define an interface first: 1: public interface IRestricted 2: { 3: Boolean Check(HttpContext ctx); 4: } An a page implementing that interface: 1: public class RestrictedPage : Page, IRestricted 2: { 3: public Boolean Check(HttpContext ctx) 4: { 5: //check the context and return a value 6: return ...; 7: } 8: } For this, we would use an handler factory such as this: 1: public class RestrictedPageHandlerFactory : PageHandlerFactory 2: { 3: private static readonly IHttpHandler forbidden = new UnauthorizedHandler(); 4:  5: public override IHttpHandler GetHandler(HttpContext context, String requestType, String virtualPath, String path) 6: { 7: IHttpHandler handler = base.GetHandler(context, requestType, virtualPath, path); 8: 9: if (handler is IRestricted) 10: { 11: if ((handler as IRestricted).Check(context) == false) 12: { 13: return (forbidden); 14: } 15: } 16:  17: return (handler); 18: } 19: } 20:  21: public class UnauthorizedHandler : IHttpHandler 22: { 23: #region IHttpHandler Members 24:  25: public Boolean IsReusable 26: { 27: get { return (true); } 28: } 29:  30: public void ProcessRequest(HttpContext context) 31: { 32: context.Response.StatusCode = (Int32) HttpStatusCode.Unauthorized; 33: context.Response.ContentType = "text/plain"; 34: context.Response.Write(context.Response.Status); 35: context.Response.Flush(); 36: context.Response.Close(); 37: context.ApplicationInstance.CompleteRequest(); 38: } 39:  40: #endregion 41: } The UnauthorizedHandler is an example of an IHttpHandler that merely returns an error code to the client, but does not cause redirection to the login page, it is included merely as an example. One thing we must keep in mind is, there can be only one handler factory registered for a given path/request type (verb) tuple. A typical registration would be: 1: <httpHandlers> 2: <remove path="*.aspx" verb="*"/> 3: <add path="*.aspx" verb="*" type="MyNamespace.MyHandlerFactory, MyAssembly"/> 4: </httpHandlers> First we remove the previous registration for ASPX files, and then we register our own. And that’s it. A very useful mechanism which I use lots of times.

    Read the article

  • UK Pilot Event: Fusion Applications Release 8 Simplified UI: Extensibility & Customization of User Experience

    - by ultan o'broin
Interested? Of course you are! But read on to understand the what, why, where, and who, and ensure this great opportunity is right for you before signing up. There will be some demand for this one, so hurry!

What: A one-day workshop where Applications User Experience will preview the proposed content for communicating the user experience (UX) tool kit intended for the next release of Oracle Fusion Applications. We will walk through the content, explain our approach, and tell you about our activities for communicating to partners and customers how to customize and extend their Release 8 user experiences for Oracle Fusion Applications with composers and the Oracle Application Development Framework Toolkit.

When and Where: Dec. 11, 2013 @ Oracle UK in Thames Valley Park. Again: this event is held in person in the UK, so ensure you can travel!

Why: We are responding to Oracle partner interest in extending and customizing Simplified UIs for Release 7, and we will use the upcoming release as our springboard for getting a powerful productivity and satisfaction message out to the Oracle ADF enterprise methodology development community, Fusion customer implementation and tailoring teams, and our Oracle partner ecosystem. This event will also be an opportunity for attendees to give Oracle feedback on the approach, ensuring our messaging and resources meet your business needs - or identifying whatever else is needed to get up and running fast!

Who: The ideal participants for this workshop are those who will be involved in system implementation roles for HCM and CRM Oracle Fusion Applications Release 8, as well as seasoned ADF developers supporting Oracle Fusion Applications. And yes, Cloud is part of the agenda!

How to Register: Use this URL: http://bit.ly/UXEXTUK13 If you have questions, then send them along right away to [email protected].

Deadline: Please RSVP by November 1, 2013.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >