
  • Exalogic updates. Enterprise Manager, Traffic Director & Virtualization

    - by JuergenKress
    Three new Exalogic resources:
    - Integrating Enterprise Manager 12c with Exalogic
    - Running Oracle Traffic Director HA with Minimal Root Usage
    - Demo: Virtualized Exalogic with Enterprise Manager
    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Technorati Tags: Exalogic, Traffic Director, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they’re back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what’s now the Sun Systems Group division of Oracle.
    Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an “oldfind” program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to work around.
    In Solaris 11, though, we still ship the find descended from SVR4 as /usr/bin/find, and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the question of whether we should add oldfind, and if so, what we should call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case, however, it seemed it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind rather than of /usr/bin/find. Thus, if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as “gold find” than as “g old find”.
    One of the concerns we often discuss in ARC is whether a change is likely to be understood by users or whether it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn’t resist putting forth a hypothetical support call for this command:
    “Hello, Oracle Solaris Support, how may I help you?” “My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn’t find gold.” “Did he get the binutils package too?” “No, he just said findutils, do we need binutils?” “Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package.” “How much does Oracle charge for that package?” “It’s free for Solaris users.” “You mean Oracle ships packages of gold to customers for free?” “Yes, if you get the binutils package, it includes GNU gold.” “New gold? Is that some sort of alchemy, turning stuff into gold?” “Not new gold, gold from the GNU project.” “Oracle’s taking gold from the GNU project and shipping it to me?” “Yes, if you get binutils, that package includes gold along with the other tools from the GNU project.” “And GNU doesn’t mind Oracle taking their gold and giving it to customers?” “No, GNU is a non-profit whose goal is to share their software.” “Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?” “Oh, Google donated it to them.” “Ah! So Oracle will give me the gold that GNU got from Google!” “Yes, if you get the package from us.” “How do I get the package with the gold?” “Just run pkg install binutils and it will put it on your disk.” “We’ve got multiple disks here - which one will it put it on?” “The one with the system image - do you know which one that is?” “Well, the note from the admin says the system is on the first disk and the users are on the second disk.” “Okay, so it should go on the first disk then.” “And where will I find the gold?” “It will be in the /usr/bin directory.” “In the user’s bin? So that’s on the second disk?” “No, it would be on the system disk, with the other development tools, like make, as, and what.” “So what’s on the first disk?” “Well, if the system image is there the commands should all be there.” “All the commands? Not just what?” “Right, all the commands that come with the OS, like the shell, ps, and who.” “So who’s on the first disk too?” “Yes. Did your admin say when he’d be back?” “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.” “I can’t imagine why.” “Oh, is why a command too?” “No, _why was a Ruby programmer.” “Ruby? Do you give those away with the gold too?” “Yes, but it comes in the ruby package, not binutils.” “Oh, I’ll have to have my admin get that package too! Thanks!”
    Needless to say, we decided this might not be the best idea. Since the GNU findutils project hasn’t had to release a serious bug fix for the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so adding oldfind didn’t seem really necessary, and we passed on including it when we update to the new findutils release.
    [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]

    Read the article

  • Handy Generic JQuery Functions

    - by Steve Wilkes
    I was a bit of a late-comer to the JQuery party, but now I've been using it for a while it's given me a host of options for adding extra flair to the client side of my applications. Here's a few generic JQuery functions I've written which can be used to add some neat little features to a page. Just call any of them from a document ready function. Apply JQuery Themeroller Styles to all Page Buttons   The JQuery Themeroller is a great tool for creating a theme for a site based on colours and styles for particular page elements. The JQuery.UI library then provides a set of functions which allow you to apply styles to page elements. This function applies a JQuery Themeroller style to all the buttons on a page - as well as any elements which have a button class applied to them - and then makes the mouse pointer turn into a cursor when you mouse over them: function addCursorPointerToButtons() {     $("button, input[type='submit'], input[type='button'], .button") .button().css("cursor", "pointer"); } Automatically Remove the Default Value from a Select Box   Required drop-down select boxes often have a default option which reads 'Please select...' (or something like that), but once someone has selected a value, there's no need to retain that. This function removes the default option from any select boxes on the page which have a data-val-remove-default attribute once one of the non-default options has been chosen: function removeDefaultSelectOptionOnSelect() {     $("select[data-val-remove-default='']").change(function () {         var sel = $(this);         if (sel.val() != "") { sel.children("option[value='']:first").remove(); }     }); } Automatically add a Required Label and Stars to a Form   It's pretty standard to have a little * next to required form field elements. This function adds the text * Required to the top of the first form on the page, and adds *s to any element within the form with the class editor-label and a data-val-required attribute: function addRequiredFieldLabels() {     var elements = $(".editor-label[data-val-required='']");     if (!elements.length) { return; }     var requiredString = "<div class='editor-required-key'>* Required</div>";     var prependString = "<span class='editor-required-label'> * </span>"; var firstFormOnThePage = $("form:first");     if (!firstFormOnThePage.children('div.editor-required-key').length) {         firstFormOnThePage.prepend(requiredString);     }     elements.each(function (index, value) { var formElement = $(this);         if (!formElement.children('span.editor-required-label').length) {             formElement.prepend(prependString);         }     }); } I hope those come in handy :)
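    To wire these up, the post says to call them from a document ready function; a minimal sketch (assuming the three functions above are already loaded on the page) might look like this:

    // Run all three helpers once the DOM is ready (jQuery assumed to be loaded)
    $(document).ready(function () {
        addCursorPointerToButtons();          // style buttons and set the pointer cursor
        removeDefaultSelectOptionOnSelect();  // drop the default option once a real value is chosen
        addRequiredFieldLabels();             // add the * Required key and per-field stars
    });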

    Read the article

  • Oracle Database In-Memory Launch - Featuring Larry Ellison - June 10 - Join the webcast!

    - by Javier Puerta
    For more than three-and-a-half decades, Oracle has defined database innovation. With our market-leading technologies, customers have been able to out-think and out-perform their competition. Soon they will be able to do that even faster. At a live launch event and simultaneous webcast, Larry Ellison will reveal the future of the database. Promote this strategic event to customers. Watch Larry Ellison on Tuesday, June 10, 2014, 19:00 – 20:30 CET (6:00 pm – 7:30 pm UK). Join the webcast here!

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggest use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests. Stubs Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems: In contract-first scenarios, the external system interface will have been defined.  But the interface may not have been setup or even developed yet for the BizTalk developers to work with. By the time you open the target location to see the data BizTalk has sent, it may have been swept away. If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later. Learning how to use the source\target systems and investigations into where things go wrong in these systems will slow down the BizTalk development effort. By the time the data is visible in a UI it may have undergone further transformations. In larger development teams working together, do you all use the same source and target instances. How do you know which data was created by whose tests? How do you know which event log error message are whose?  Another developer may have “cleaned up” your data. It is harder to write BizUnit tests that clean up the data\logs after each test run. What if your B2B partners' source or target system cannot support the sort of testing you want to do. They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams. There may be licencing costs of setting up an instances of the external system. The stubs I like to use are generic stubs that can accept\return any message type.  Usually I need to create one per protocol. They should be driven by BizUnit steps to: validates the data received; and select a response messages (or error response). Once built, they can be re-used for many integration tests and from project to project. I’m not saying that developers should never test against a real instance.  Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works.  Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk. Tests Automated “Stubbed Integration Tests” are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. It will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine grained unit testing of individual BizTalk components is still encouraged.  But BizUnit provides much the easiest way to test some components types (e.g. Orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits: source: http://biztalkbddsample.codeplex.com – Video 1. 
Requirements can be easily defined using Given/When/Then Requirements are close to the code so easier to manage as features and scenarios Requirements are defined in domain language The feature files can be used as part of the documentation The documentation is accurate to the build of code and can be published with a release The scenarios are effective to document the scenarios and are not over excessive The scenarios are maintained with the code There’s an abstraction between the intention and implementation of tests making them easier to understand The requirements drive the testing These same tests can also be used to drive load testing as described here. If you don't do this ... If you don't follow the above “Stubbed Integration Tests” approach, the developer will need to manually trigger the tests. This has the following risks: Developers are unlikely to check all the scenarios each time and all the expected conditions each time. After the developer leaves, these manual test steps may be lost. What test scenarios are there?  What test messages did they use for each scenario? There is no mechanism to prove adequate test coverage. A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks: It moves the tests downstream, so problems will be found later in the process. Testers may not check all the expected conditions within the BizTalk infrastructure such as: event logs, suspended messages, etc. These automated tests may also get in the way of manual tests run on these environments.
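    As an illustration of the Given/When/Then style mentioned above, a stubbed BizTalk integration scenario might be written as a feature like this (a hypothetical example for illustration only, not taken from the referenced sample):

    Feature: Order routing
      Scenario: A valid order is routed to the ERP stub
        Given the ERP endpoint is simulated by a generic stub
        And a valid order message is available to the source stub
        When the source stub submits the order to BizTalk
        Then the ERP stub should receive one order message
        And the message should validate against the ERP order schema
        And no messages should be suspended in BizTalk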

    Read the article

  • Windows 8 App Downloads Increasing + Over 5,000 Apps Available

    - by David Paquette
    Windows 8 will be unleashed on the general public tomorrow, and I thought it would be a good time to review some of the numbers I have been tracking over the last month. Downloads of Windows 8 apps have been steadily increasing over the last month.  Below is a screenshot from the App Summary page for my Windows 8 app.  The blue line is my app, while the orange line is the average for the top 5 apps in that subcategory.  Considering the large gap between the two, I think it is safe to assume that my app is NOT in the top 5 in the subcategory. The spike in the last couple of days is fairly dramatic, and I am a little surprised by that.  I would have expected that kind of spike on the days following the official release as opposed to the days leading up to the release.   Finally, the all-important app count.  There have been some stories floating around that the Windows 8 Store is a ghost town and that there are no apps available.  I think these stories might be exaggerating the situation a little.  As of this morning, in the US store there are over 5,000 apps available for download.  Obviously a far cry from the hundreds of thousands available in other app stores, but we are seeing solid growth in this number. Less than a month ago, that number was 2,000. That means the store more than doubled in less than a month. If the growth continues, it won’t be long before the Windows 8 Store is filled with all the apps you need (and a whole lot you don’t need).

    Read the article

  • Register Now!

    - by Claudia Caramelli-Oracle
    Did you know that Italian regulations restrict companies from sending communications by e-mail without your explicit consent? By subscribing to Oracle communications, you can only benefit! Here are a couple of examples:
    Keep your Oracle knowledge at its best:
    - Stay up to date on Oracle technologies with the latest information and announcements about our products and services
    - Stay current with regular industry best practices and analyst reports
    - Hear directly from our management
    - Receive invitations to local events, where you can meet Oracle specialists and expand your network with other customers
    Control the types of information you receive:
    - Manage the types of content you want to receive by subscribing to topics based on the role, industry, or product that interests you
    - Or you can always choose to unsubscribe at any time with our "one-click unsubscribe"
    Register now for your Oracle account here: https://profile.oracle.com/

    Read the article

  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC?
       a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview
    II. Software requirements for this tutorial
       a. Visual Studio 2010/2012. You can get your free copy here: Microsoft Visual Studio 2012
       b. MVC Framework
          Option 1 - Install using a standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=30683
          Option 2 - Install using the Web Platform Installer: http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc
    III. Creating your first MVC4 application
       a. In Visual Studio, click the File > New > Solution link.
       b. Click Other Project Types > Visual Studio Solutions, select Blank Solution in the templates window, and let us name our solution MVCPrimer.
       c. Now click File > New and select Project.
       d. Select Visual C# > Web, select ASP.NET MVC 4 Web Application, and enter MyWebSite as the name.
       e. Select Empty, select Razor as the view engine, and uncheck Create a unit test project.
       f. You can now view a basic MVC 4 application structure in your Solution Explorer.
       g. Now we will add our first controller by right-clicking the Controllers folder in Solution Explorer and selecting Add > Controller.
       h. Change the name of the controller to HomeController and, under the scaffolding options, select Empty MVC Controller.
       i. You will now see a basic controller with an Index method that returns an ActionResult.
       j. We will now add a new view folder for our Home controller. Right-click the Views folder in Solution Explorer, select Add > New Folder, and name this folder Home.
       k. Add a new view by right-clicking the Views > Home folder and selecting Add View.
       l. Name the view Index, select Razor (CSHTML) as the view engine, leave all checkboxes unchecked for now, and click Add.
       m. Note the relationship between our HomeController and the Home views subfolder.
       n. Add new HTML content to our newly created Index view.
       o. Press F5 to run our MVC application.
       p. We will now create our model. Right-click the Models folder in Solution Explorer and select Add > Class.
       q. Let us name our class Customer.
       r. Edit the Customer class with the following code (see the sketch of this class at the end of this post).
       s. Open the HomeController by double-clicking HomeController in our Controllers folder and edit it as follows:

          using System;
          using System.Collections.Generic;
          using System.Linq;
          using System.Web;
          using System.Web.Mvc;

          namespace MyWebSite.Controllers
          {
              public class HomeController : Controller
              {
                  //
                  // GET: /Home/

                  public ActionResult Index()
                  {
                      return View();
                  }

                  public ActionResult ListCustomers()
                  {
                      List<Models.Customer> customers = new List<Models.Customer>();

                      // Add first customer to our collection
                      customers.Add(new Models.Customer()
                      {
                          Id = 1,
                          CompanyName = "Volvo",
                          ContactNo = "123-0123-0001",
                          ContactPerson = "Gustav Larson",
                          Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"
                      });

                      // Add second customer to our collection
                      customers.Add(new Models.Customer()
                      {
                          Id = 2,
                          CompanyName = "BMW",
                          ContactNo = "999-9876-9898",
                          ContactPerson = "Franz Josef Popp",
                          Description = "Bayerische Motoren Werke AG, (BMW; English: Bavarian Motor Works) is a " +
                                        "German automobile, motorcycle and engine manufacturing company founded in 1917. "
                      });

                      // Add third customer to our collection
                      customers.Add(new Models.Customer()
                      {
                          Id = 3,
                          CompanyName = "Audi",
                          ContactNo = "983-2222-1212",
                          ContactPerson = "Karl Benz",
                          Description = " is a multinational division of the German manufacturer Daimler AG,"
                      });

                      return View(customers);
                  }
              }
          }

       t. Let us now create a view for this class. But before continuing, press Ctrl + Shift + B to rebuild the solution; this will make the previously created model appear in the Model class drop-down of the Add View menu. Right-click the Views > Home folder and select Add > View.
       u. Let us name our view ListCustomers, select Razor (CSHTML) as the view engine, put a check mark on Create a strongly-typed view, and select Customer (MyWebSite.Models) as the model class. Select List as the scaffold template and click OK.
       v. Run the MVC application by pressing F5 and, in the address bar, append Home/ListCustomers. We should now see a web page similar to the one below.
       x.
You can edit ListCustomers.CSHTML to remove and add HTML codes @model IEnumerable<MyWebSite.Models.Customer>   @{     Layout = null; }   <!DOCTYPE html>   <html> <head>     <meta name="viewport" content="width=device-width" />     <title>ListCustomers</title> </head> <body>     <h2>List of Customers</h2>     <table border="1">         <tr>             <th>                 @Html.DisplayNameFor(model => model.CompanyName)             </th>             <th>                 @Html.DisplayNameFor(model => model.Description)             </th>             <th>                 @Html.DisplayNameFor(model => model.ContactPerson)             </th>             <th>                 @Html.DisplayNameFor(model => model.ContactNo)             </th>         </tr>         @foreach (var item in Model) {         <tr>             <td>                 @Html.DisplayFor(modelItem => item.CompanyName)             </td>             <td>                 @Html.DisplayFor(modelItem => item.Description)             </td>             <td>                 @Html.DisplayFor(modelItem => item.ContactPerson)             </td>             <td>                 @Html.DisplayFor(modelItem => item.ContactNo)             </td>                   </tr>     }         </table> </body> </html> y. Press F5 to run the MVC Application   z. You will notice some @HTML.DisplayFor codes. These are called HTML Helpers you can read more about HTML Helpers on this site http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp   That’s all. You now have your first MVC4 Razor Engine Web Application . . .
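    For reference, step r above edits the Customer model but its listing is not included in this excerpt; a minimal sketch, inferred from the properties the controller and view use (Id, CompanyName, ContactNo, ContactPerson, Description), could be:

    namespace MyWebSite.Models
    {
        // Simple POCO model used by HomeController.ListCustomers and the ListCustomers view
        public class Customer
        {
            public int Id { get; set; }
            public string CompanyName { get; set; }
            public string ContactNo { get; set; }
            public string ContactPerson { get; set; }
            public string Description { get; set; }
        }
    }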

    Read the article

  • JSR-107 Early Draft Released

    - by rob.misek
    After nearly 12 years the early draft of JSR-107 has been released. Brian Oliver, co-spec lead, details this update including information on the source, resourcing and the JCP 2.7 process. Check out Brian's update here. "Yesterday the JCP made the important step of posting the Early Draft specification and API for JSR107. [...]While an enormous amount of progress was made last year and early this year (by many people – not so much me) the JSR was somewhat delayed while the legals were resolved, especially with respect to ensuring clean and clear IP for Java itself, the eventual JCache Providers and the community.   Thankfully this stage is complete and we can move forward."

    Read the article

  • New Whitepaper - Exalogic Virtualization Architecture

    - by Javier Puerta
    One of the key enhancements in the current generation of Oracle Exalogic systems—and the focus of this whitepaper—is Oracle’s incorporation of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology. This permits the system to share the internal InfiniBand network and storage fabric between as many as 63 virtual machines per physical server node with near-native performance, simultaneously allowing both high performance and high workload consolidation. Download it here: An Oracle White Paper - November 2012 - Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads

    Read the article

  • New Sample Demonstrating the Traversing of Tree Bindings

    - by Duncan Mills
    A technique that I seem to use a fair amount, particularly in the construction of dynamic UIs, is the use of an ADF Tree Binding to encode a multi-level master-detail relationship which is then expressed in the UI in some kind of looping form – usually a series of nested af:iterators, rather than the conventional tree or treetable. This technique exploits two features of the tree binding. First, the fact that a tree binding can return both a collectionModel as well as a treeModel; this collectionModel can be used directly by an iterator. Secondly, that the “rows” returned by the collectionModel themselves contain an attribute called .children. This attribute in turn gives access to a collection of all the children of that node, which can also be iterated over. Putting this together, you can represent the data encoded into a tree binding in all sorts of ways. As an example I’ve put together a very simple sample based on the HT schema and uploaded it to the ADF Sample project. It produces this UI: The important code is shown here for a Region -> Country -> Location hierarchy:
    <af:iterator id="i1" value="#{bindings.AllRegions.collectionModel}" var="rgn">
      <af:showDetailHeader text="#{rgn.RegionName}" disclosed="true" id="sdh1">
        <af:iterator id="i2" value="#{rgn.children}" var="cnty">
          <af:showDetailHeader text="#{cnty.CountryName}" disclosed="true" id="sdh2">
            <af:iterator id="i3" value="#{cnty.children}" var="loc">
              <af:panelList id="pl1">
                <af:outputText value="#{loc.City}" id="ot3"/>
              </af:panelList>
            </af:iterator>
          </af:showDetailHeader>
        </af:iterator>
      </af:showDetailHeader>
    </af:iterator>
    You can download the entire sample from here:

    Read the article

  • The Bing Sting - an alternative opinion

    - by Charles Young
    I know I'm a bit of an MS fanboy at times, but please, am I missing something here? Microsoft, with permission of users, exploits clickstream data gathered by observing user behaviour. One use for this data is to improve Bing queries. Google equips twenty of its engineers with laptops and installs the widgets required to provide Microsoft with clickstream data. It then gets their engineers to repeatedly (I assume) type in 'synthetic' queries which bring back 'doctored' hits. It asks its engineers to then click these results (think about this!). So, the behaviour of the engineers is observed and the resulting clickstream data goes off to Microsoft. It is processed and 'improves' Bing results accordingly.   What exactly did Microsoft do wrong here?   Google's so-called 'Bing sting' is clearly a very effective attack from a propaganda perspective, but is poor practice from a company that claims to do no evil. Generating and sending clickstream data deliberately so that you can then subsequently claim that your competitor 'copied' that data from you is neither fair nor reasonable, and suggests to me a degree of desperation in the face of real competition.   Monopolies are undesirable, whether they are Microsoft monopolies or Google monopolies.    Personally, I'm glad Microsoft has technology in place to observe user behaviour (with permission, of course) and improve their search results using such data. I can only assume Google doesn't implement similar capabilities. Sounds to me as if, at least in this respect, Microsoft may offer the better technology.

    Read the article

  • Validate if a TextBox Value Starts with a Specific Letter

    - by Vincent Maverick Durano
    In case you will be working on a page that needs to validate the first character of the TextBox entered by a user then here are two options that you can use: Option 1: Using an array   1: <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server"> 2: <script type="text/javascript"> 3: function CheckFirstChar(o) { 4: var arr = ['A', 'B', 'C', 'D']; 5: if (o.value.length > 0) { 6: for (var i = 0; i < arr.length; i++) { 7: if (o.value.charAt(0) == arr[i]) { 8: alert('Valid'); 9: return true; 10: } 11: else { 12: alert('InValid'); 13: return false; 14: } 15: } 16: } 17: } 18: </script> 19: </asp:Content> 20: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 21: <asp:TextBox ID="TextBox1" runat="server" onblur="return CheckFirstChar(this);"></asp:TextBox> 22: </asp:Content>   The example above uses an array of string for storing the list of  characters that a TextBox value should start with. We then iterate to the array and compare the first character of TextBox value to see if it matches any characters from the array. Option 2: Using Regular Expression (Preferred way)   1: <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server"> 2: <script type="text/javascript"> 3: function CheckFirstChar(o) { 4: pattern = /^(A|B|C|D)/; 5: if (!pattern.test(o.value)) { 6: alert('InValid'); 7: return false; 8: } else { 9: alert('Valid'); 10: return true; 11: } 12: } 13: </script> 14: </asp:Content> 15: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 16: <asp:TextBox ID="TextBox1" runat="server" onblur="return CheckFirstChar(this);"></asp:TextBox> 17: </asp:Content>   The example above uses regular expression with the pattern  /^(A|B|C|D)/. This will check if the TextBox value starts with A,B,C or D. Please note that it's case sensitive. If you want to allow lower case then you can alter the patter to this /^(A|B|C|D)/i. The i in the last part will cause a case-insensitive search.   That's it! I hope someone find this post useful!

    Read the article

  • Visual Studio 2010 Professional now on Dreamspark!

    - by Stacy Vicknair
    If you are a student and you were looking for your VS2010 fix today, be sure to check out Dreamspark.com and get your own copy! Dreamspark is simple; it’s about giving students Microsoft professional tools at no charge. Visit Dreamspark right now to sign up and get VS2010!   Technorati Tags: VS2010,Dreamspark,students,.NET

    Read the article

  • The Social Enterprise: Gangnam Style

    - by Mike Stiles
    Are only small and medium businesses able to put social strategies in place, generate consistent, compelling content for customers, and be nimble enough to listen and respond to the social communities they build? Or are enterprise organizations eagerly and effectively adopting social as well? It depends on whom inside the organization you ask. A study from Attensity looked at who “gets” social inside enterprise organizations. The results were unsurprising. Mostly, Generation X and Y employees who came of age with social as part of their lives and as a key communications vehicle understand it. Imagine being a 25-year-old at a company that bans employees from accessing Facebook at work. You may as well tell them they can’t use phones and must do all calculations on an abacus. To them, such policy is absent of real-world logic and signals to them the organization is destined to be the victim of an up-and-comer. After that, it’s senior management that gets social. You don’t get to be in senior management without reading a few things and paying attention. Most senior managers are well aware of the impact social has had and will have, though they may be unsure of what to do about it. The better ones will utilize those on the inside who do inherently know how to communicate and build virtual relationships using social. The very best will get the past out of the way for these social innovators, so the new communications can be enacted minus counterproductive dictums, double-clutching, meeting-creep, and all the other fading internal practices that water down content and impede change. Organizationally, the Attensity study found 81% of enterprise companies believe failing to embrace social will result in their being left behind. Yet our old friend fear still has many captive in its clutches. 79% feel overwhelmed by the volume of social data available, something a social technology partner with goal-oriented analytics expertise could go a long way toward alleviating. Then there’s the fear of social having a negative impact. This comes from a lack of belief in the product, the customer service, or both. The public uses social not to go out and slay brands. They’re using it to be honest. If the fear is that honesty will reflect badly on the brand, the brand has much bigger, broader problems than what happens on Facebook. Sadly, most enterprise organizations still see social as a megaphone, a one-way channel with which to hit people with ads. They either don’t understand social relationships, or don’t want any. The truly unenlightened manager will always say, “We help them by selling them our stuff.” “Brand affinity” is a term, it’s just not one assigned much value in enterprise organizations. Which brings us to Psy, the Korean performer whose Internet video phenom “Gangnam Style,” as of this writing, has been viewed 438,550,238 times on YouTube. It’s bigger than anything a brand will probably ever publish. Most brands would never have seen the point of making or publishing it. But a funny thing happened on the way to Internet success. The video literally doubled the stock price of Psy’s father’s software firm. NH Investment and Securities said, "The positive sentiment has attracted investors just because of the fact the company is owned by Psy's father and uncle.” The company wasn’t mentioned or seen in the video in any way, yet reaped tangible rewards just for being tangentially associated with it. 
Imagine your brand being visibly and directly responsible for such a smash and tell me it’s worthless. When enterprise organizations embrace the value of igniting passions, making people happier, solving their problems, informing them, helping them have fun, etc., then they will have fully embraced social, and will reap the brand affinity rewards of heightened awareness, brand loyalty and yes, sales.

    Read the article

  • ODEE Green Field (Windows) Part 3 - SOA Suite

    - by AndyL-Oracle
     So you're still here, are you? I'm sure you're probably overjoyed at the prospect of continuing with our green field installation of ODEE. In my previous post, I covered the installation of WebLogic - you probably noticed, like I did, that it's a pretty quick install. I'm pretty certain this had everything to do with how quickly the next post made it to the internet! So let's dig in. Make sure you've followed the steps from the initial post to obtain the necessary software and prerequisites! Unpack the RCU (Repository Creation Utility). This ZIP file contains a directory (rcuHome) that should be extracted into your ORACLE_HOME. Run the RCU – execute rcuHome/bin/rcu.bat. Click Next. Select Create and click Next. Enter the database connection details and click Next – any connection failure will show in the Messages box. Click OK. Expand and select the SOA Infrastructure item. This will automatically select additional required components. You can change the prefix used, but DEV is recommended. If you are creating a sandbox that includes additional components like WebCenter Content and UMS, you may select those schemas as well, but they are not required for a basic ODEE installation. Click Next. Click OK. Specify the password for the schema(s). Then click Next. Click Next. Click OK. Click OK. Click Create. Click Close. Unpack the SOA Suite installation files into a single directory, e.g. SOA. Run the installer – navigate to and execute SOA/Disk1/setup.exe. If you receive a JDK error, switch to a command line to start the installer. To start the installer via the command line, go to Start > Run > cmd and cd into the SOA\Disk1 directory. Run setup.exe -jreLoc <pathtoJRE>. Ensure you do not use a path with spaces – use the ~1 notation as necessary (your directory name must not exceed 8 characters, so “Program Files” becomes “Progra~1” and “Program Files (x86)” becomes “Progra~2” in this notation; see the example command below). Click Next. Select Skip and click Next. Resolve any issues shown and click Next. Verify your Oracle home locations. Defaults are recommended. Click Next. Select your application server. If you’ve already installed WebLogic, this should be automatically selected for you. Click Next. Click Install. Allow the installation to progress… Click Next. Click Finish. You can save the installation details if you want. That should keep you satisfied for the moment. Get ready, because the next posts are going to be meaty! 
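    For the command-line fallback described above, the invocation would look something like this (the install and JRE paths here are only examples; substitute your own, using the ~1 short names to avoid spaces):

    cd /d C:\install\SOA\Disk1
    setup.exe -jreLoc C:\Progra~1\Java\jdk1.6.0_29\jre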

    Read the article

  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved Naming Documents (or is it “Document, Naming”?) Tis but thy name that is my enemy; Thou art thyself, though not a Montague. What's Montague? It is nor hand, nor foot, Nor arm, nor face, nor any other part Belonging to a man. O, be some other name! What's in a name? That which we call a rose By any other name would smell as sweet; So Romeo would, were he not Romeo call'd, Retain that dear perfection which he owes Without that title. Romeo, doff thy name And for that name which is no part of thee Take all myself.  Shakespeare – Romeo and Juliet Act II, Scene 2 We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed it is because of their names that the doomed lovers die. There might be life and death in a name (BTW, when I wrote this monogram, I was in Hatfield, PA. Remember the Hatfields and the McCoys?) This is a bit extreme, but in the field of Knowledge Management (KM) names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? “Managing a site” or “Site, managing”? Nine times out of ten I’d opt for the latter. Almost everything we do is “Managing” so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to. So, answer this – is it a “rule of thumb” or a “thumb rule?” This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit concrete and abstract (more about this later), but to most people “thumb rule” is meaningless while “rule of thumb” is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect the “rule of thumb”. It is the more meaningful title. Abstract and Concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner, are concrete to a practitioner of one profession or another and may even have different concrete meanings in different professional jargons. Think about “running”. To an executive it means running a business, to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do it in general. Even dictionaries encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete first, but within the audience’s jargon. Even the title of this monogram is a question. Do I name it “Naming Documents” or “Documents, Naming”? Well, my own rule of thumb (“Here he goes again!?”) states that the latter is better because it starts with a noun, but this is a document about naming more than it about documents. The rules of naming also apply to graphs and charts, excel spreadsheets, and so on. Thus, I vote for the former.  A better title could have been “Naming Objects” only the word “Object” is a bit too abstract. How about just “Naming” or “Naming, rules of”? You get the drift. One of the ways to resolve all of this is to store the documents in Knowledge-Bases, which may become the subjects of a future punditry. Knowledge bases use keywords to describe their content.  Use a Metadata store for the keywords to at least attempt some common grounds. 
Here is another general rule (rule of thumb?!!) – put at least the one keyword in the title. Use subtitles. Here is an example: Migrating documents – Screening, cleaning, and organizing our knowledge. The main keyword is “documents”, next is “migrating”, other keywords also appear in the subtitle. They are “screening”, “cleaning”, and “organizing”. Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • Installing Oracle 11g SOA Suite?

    - by asantaga
    Are you working for an SI like Accenture or Cap Gemini? Are you a sales consultant who needs to install software quickly? Well, I'm sure if you're reading this you probably are. Anyway, if you're like me and like many techies, reading manuals isn't natural to us: we'll download the software, try to install it and then… ultimately fail, or take a lot longer than we should. However, never fear, help is here! For Oracle 11g SOA Suite (PS3), a good friend of mine, a SOA 11g PM in the States, has written a quick-start document, and it's on OTN. Although the document is PS3 focused, apart from the download URLs it's also totally applicable to PS4. The document can be found at this link

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard.  I have then been mapping these flat file schemas to a "normal" XML schema format. I have not previously had any cause to map flat files and ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source it threw an XSL transform error: Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1. Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem.  Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time I figured out what the problem was. Looking at the map properties, I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "Native".

    Read the article

  • Do you have an address column somewhere in your database?

    - by shay.shmeltzer
    Do you have an address column somewhere in your database? If the answer is yes, then the Web seminar I'm going to run together with the guys from Navteq on May 26th might be of interest to you. You see, we all have geographically related information in our databases, but many of us don't actually use it for any geographical operations or representations. Well, once you attend the "Add Maps to Your Java Applications - the Easy Way" seminar this might change. In the seminar we'll give you a quick overview of the spatial-related capabilities of the Oracle DB, Middleware and tools, and a demo showing you how easy it is to actually get data to show up on a map in your application and to interact with it. So register today, and mark your calendar.

    Read the article

  • Customer Experience in the Year Ahead

    - by Christina McKeon
    With 2012 coming to an end soon, we find ourselves reflecting on the year behind us and the year ahead. Now is a good time to reflect on your customer experience initiatives to see how far you have come and where you need to go. Looking back on your customer experience efforts this year, were you able to accomplish the following?
    - Customer journey mapping
    - Align processes across the entire customer lifecycle (buying and owning)
    - Connect all functional areas to the same customer data
    - Deliver consistent and personal experiences across all customer touchpoints
    - Make it easy and rewarding to be your customer
    - Hire and develop talent that drives better customer experiences
    - Tie key performance indicators (KPIs) to each of your customer experience objectives
    This is by no means a complete checklist for your customer experience strategy, but it does help you determine if you have moved in the right direction for delivering great customer experiences. If you are just getting started with customer experience planning or were not able to get to everything on your list this year, consider focusing on customer journey mapping in 2013. This exercise really helps your organization put your customer in the center and understand how everything you do affects that customer. At Oracle, we see organizations in various stages of customer experience maturity all learn a lot when they go through journey mapping. Companies just starting out with customer experience get a complete understanding of what it is like to be a customer and how everything they do affects that customer. And organizations that are further along with customer experience often find journey mapping helps provide perspective when re-visiting their customer experience strategy. Happy holidays and best wishes for delivering great customer journeys in 2013!

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders with the individual level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

    ID | ID_Parent | FolderName
    1  | NULL      | Level 1
    2  | 1         | Level 2
    3  | 2         | Level 3

    To my understanding, the table was built so that:
    - Each proposal can have multiple documents stored at various locations
    - Different users working on the proposal will have different access levels to the folder; if one user is assigned access to a folder level, she/he can see all the sub folders and their content.

    Now we understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer to show a document link on the report. I wasn't surprised to find the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, not to mention the degradation in performance. With the help of 2 posts (listed at the end of this post), I was able to come up with this solution:
    1. Use recursive SQL to build out the folder path
    2. Use the SQL XML trick to concat the strings.

    Code (a reminder: I built this code in a stored procedure. If you copy the syntax into a simple query window and execute, you'll get an incorrect syntax error):

    -- Get all folders and group them by the original DocumentFolderID in PTSDocument table
    ;WITH DocFoldersByDocFolderID(PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
    AS (
        -- first member
        SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
             , PTSDocumentFolderID_Parent
             , 'sDocumentFolder' = sName
             , 'nLevel' = CONVERT(INT, 1000000)
        FROM (SELECT DISTINCT PTSDocumentFolderID
              FROM dbo.PTSDocument_DY WITH(READPAST)
             ) AS d1
             INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                   ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
        UNION ALL
        -- recursive
        SELECT ddf1.PTSDocumentFolderID_Original
             , df1.PTSDocumentFolderID_Parent
             , 'sDocumentFolder' = df1.sName
             , 'nLevel' = ddf1.nLevel - 1
        FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
             INNER JOIN DocFoldersByDocFolderID AS ddf1
                   ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent)
    -- Flatten out folder path
    , DocFolderSingleByDocFolderID(PTSDocumentFolderID_Original, sDocumentFolder)
    AS (SELECT dfbdf.PTSDocumentFolderID_Original
             , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                          FROM DocFoldersByDocFolderID
                                          WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)
                                          ORDER BY PTSDocumentFolderID_Original, nLevel
                                          FOR XML PATH ('')),1,1,'')
        FROM DocFoldersByDocFolderID AS dfbdf
        GROUP BY dfbdf.PTSDocumentFolderID_Original)

    And voila, I use the second CTE to join back to my original query (which is now a CTE for Source, as we can now use MERGE to do INSERT and UPDATE at the same time). Each part of this solution would not solve the problem by itself because:
    - If I don't use recursion, I cannot build out the path properly. If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.
    - If I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in the report function, but I'd rather not deal with the beginning or ending backslash and how to attach the document name.
    - PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above.
    I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way.
    The 2 posts that helped me along:
    - Recursive Queries Using Common Table Expression
    - How to use GROUP BY to concatenate strings in SQL server?
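    For anyone who just wants the GROUP_CONCAT-style trick in isolation, a stripped-down sketch (using a hypothetical dbo.Folder table with ID, ID_Parent and FolderName columns, matching the example table above) looks like this:

    -- For each folder, concatenate the names of its direct children, separated by backslashes;
    -- STUFF(..., 1, 1, '') removes the leading separator produced by FOR XML PATH('').
    SELECT p.ID,
           ChildFolders = STUFF((SELECT '\' + c.FolderName
                                 FROM dbo.Folder AS c
                                 WHERE c.ID_Parent = p.ID
                                 ORDER BY c.FolderName
                                 FOR XML PATH('')), 1, 1, '')
    FROM dbo.Folder AS p;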

    Read the article

  • ICAM Webcast Replay and slides

    - by Darin Pendergraft
    On October 10, 2012, Derrick Harcey and I co-presented on how Oracle IDM helps customers address the guidelines of Identity, Credential, and Access Management, from a Federal (FICAM) and a State (SICAM) perspective. If you missed the webcast, here is a link to the replay: webcast replay link. Derrick did a nice job reviewing the various ICAM components and architectures, and then invited me to provide additional detail on the Oracle technology stack. He then closed by mapping the ICAM architectures to various components of the Oracle IDM platform. The next webcast in the Secure Government Training Series, Safeguarding Government Cyberspace, will be held Wednesday, November 28th.

    Read the article
