Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • SCSF for Visual Studio 2010

    - by Anthony Trudeau
    The Smart Client Software Factory (SCSF) for Visual Studio 2010 was uploaded tonight.  You can get it, the source code, and the documentation on the patterns & practices page. Note: Do not forget to "unblock" the documentation (CHM) file after you download it.  To unblock it, right-click the file, choose Properties, and click the Unblock button.

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practices Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully.  Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double-click on the appropriate job.  Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)
    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull

      exec [dbo].[sp_BackupAllFull_Schedule]
        'd' /* Frequency */,
        'BTS' /* Name */,
        '<destination path>' /* location of backup files */

    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC). For example:

      exec [dbo].[sp_BackupAllFull_Schedule]
        'd' /* Frequency */,
        'BTS' /* Name */,
        '<destination path>' /* location of backup files */,
        0, 22

    MarkAndBackupLog

      exec [dbo].[sp_MarkAll]
        'BTS' /* Log mark name */,
        '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

      exec [dbo].[sp_MarkAll]
        'BTS' /* Log mark name */,
        '<destination path>' /* location of backup files */,
        1

    ClearBackupHistory

      exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)
    This job is comprised of a single step.

    Archive and Purge

      exec dtasp_BackupAndPurgeTrackingDatabase
        0,    --@nLiveHours tinyint,
        1,    --@nLiveDays tinyint = 0,
        30,   --@nHardDeleteDays tinyint = 0,
        null, --@nvcFolder nvarchar(1024) = null,
        null, --@nvcValidatingServer sysname = null,
        0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than HardDeleteDays will be deleted - this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down, while the instances are allowed to continue, thus preventing the DTA database from growing indefinitely.  This value should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error (visible in SQL Server Management Studio under Job Activity Monitor > View History):

      DTA Purge and Archive (BizTalkDTADb) Job failed
      The @nvcFolder parameter cannot be null. Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

      exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

      exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java, titled "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much-lauded new book, Scala for the Impatient.  None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, "you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable." Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++.  When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by reading the interview here.

    Read the article

  • Coherence 3.7.1 Released

    - by JuergenKress
    Oracle Coherence 3.7.1 introduces a REST API, Exalogic InfiniBand integration, improved data access performance thanks to more efficient in-memory and disk-based storage, query explain plan support, and much more - download it now! View the webcast: Unbeatable Performance for your Cloud Application Foundation. To download Coherence 3.7.1 please visit OTN.

    Coherence screencasts:
    - Coherence 3.7.1 – Extend Only Keys
    - Coherence 3.7.1 – REST Support
    - Coherence 3.7.1 – POF Object Identities and References
    - Coherence 3.7.1 – POF Annotation Support
    - Coherence 3.7.1 – Query Explain Plan

    For more information please visit the Oracle Coherence Knowledge Base. For regular Coherence news, become a member of the WebLogic Partner Community: first log in at http://partner.oracle.com and then visit http://www.oracle.com/partners/goto/wls-emea

    Read the article

  • Insurers Pushed to Transform Their Business

    - by Calvin Glenn
    Everyone in the P&C industry has heard it: “We can’t do it.” “Nobody wants to do it.” “We can’t afford to do it.”  Unfortunately, what they’re referencing are the reasons many insurers are still trying to maintain their business processing on legacy policy administration systems, attempting to bide time until there is no other recourse but to give in, bite the bullet, and take on the monumental task of replacing an entire policy administration system (PAS). Just the thought of that project sends IT, business users and management reeling. But is that fear justified?  It is a bit daunting when one realizes that a complete policy administration system replacement will touch almost every function an insurer manages, from quoting and rating to underwriting, distribution, and even customer service. And everyone has heard at least one horror story about a transformation initiative that far exceeded its budget and its promised implementation / go-live timeline.

    But does it have to be that hard?  Surely, in an age where a person can voice-activate their DVR to record a TV program from a cell phone, there has to be someone somewhere who’s figured out how to simplify this process - a way to help insurers of all sizes transform and grow their business while also delivering on their overall objectives of speed to market and straight-through processing for applications, quoting, underwriting, and simplified product development. Maybe we’re looking too hard and the answer is simple and straightforward: why replace the entire machine when all it really needs is a new part… a single enterprise rating system?

    This core, modular piece of the policy administration system is the foundation of product development and rate management that enables insurers to provide the right product at the right price to the right customer through the best channels at any given moment in time. The real benefit of a single enterprise rating system is the ability to deliver enhanced business capabilities, such as improved product management, streamlined underwriting, and speed to market. With these benefits, carriers have accomplished a portion of their overall transformation goal. Furthermore, lessons learned from the rating project can be applied to the bigger, down-the-road PAS project to support the successful completion of the overall transformation endeavor.

    At the recent Oracle OpenWorld Conference in San Francisco, attendees heard about a recent “go-live” project from an Oracle Insurance Tier 1 insurer that did exactly what is proposed above: replaced just the rating portion of its legacy policy administration system with Oracle Insurance Insbridge Rating and Underwriting.  This change gave the insurer greater flexibility to set rates that better reflect risk while enabling the company to support its market segment strategy. Using the Oracle Insurance Insbridge enterprise rating solution, the insurer was able to reduce processing time for agents and underwriters, gained the ability to support proprietary rating models, and improved pricing accuracy.

    There is mounting pressure on P&C insurers to produce growth and show net profitability in the midst of modest overall industry growth, large weather-related losses and intensifying competition for market share.  Insurers are also being asked to improve customer service, offer a differentiated value proposition and simplify insurance processes.
    While the demands are many, there is an easy answer: invest in and update the most mission-critical application in your arsenal, the single enterprise rating system.

    Download the podcast to listen to “Stand-Alone Rating Engine - Leading Force Behind Core Transformation Projects in the P&C Market,” originally recorded in October 2013.

    Related resources:
    White Paper: Stand-Alone Rating Engine: Leading Force Behind Core Transformation Projects in the P&C Market
    Webcast On Demand: Stand-Alone Rating Engine and Core Transformation for P&C Insurers

    Don’t forget to keep up with us year-round:
    Facebook: www.facebook.com/oracleinsurance
    Twitter: www.twitter.com/oracleinsurance
    YouTube: www.youtube.com/oracleinsurance

    Read the article

  • Two things I learned this week...

    - by noreply(at)blogger.com (Thomas Kyte)
    I often say "I learn something new about Oracle every day".  It really is true - there is so much to know about it, it is hard to keep up sometimes.

    Here are the two new things I learned.  The first is regarding temporary tablespaces.  In the past - when people have asked "how can I shrink my temporary tablespace" - I've said "create a new one that is smaller, alter your database/users to use this new one by default, wait a bit, drop the old one".  Actually I usually said first "don't, it'll just grow again", but some people really wanted to make it smaller.

    Now, there is an easier way: http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_3002.htm#SQLRF53578

    Using alter tablespace temp shrink space.

    The second thing is just a little sqlplus quirk that I probably knew at one point but totally forgot.  People run into problems with &'s in sqlplus all of the time as sqlplus tries to substitute in for an &variable.  So, if they try to select '&hello world' from dual - they'll get:

      ops$tkyte%ORA11GR2> select '&hello world' from dual;
      Enter value for hello:
      old   1: select '&hello world' from dual
      new   1: select ' world' from dual

      'WORLD
      ------
       world

      ops$tkyte%ORA11GR2>

    One solution is to "set define off" to disable the substitution (or set define to some other character).  Another oft quoted solution is to use chr(38) - select chr(38)||'hello world' from dual.  I never liked that one personally.  Today - I was shown another way: https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4549764300346084350#4573022300346189787

      ops$tkyte%ORA11GR2> select '&' || 'hello world' from dual;

      '&'||'HELLOW
      ------------
      &hello world

      ops$tkyte%ORA11GR2>

    Just concatenate '&' to the string - sqlplus doesn't touch that one!  I like that better than chr(38) (but a little less than set define off....)

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user, never afraid to bust out the console line. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer-literate person :)

    My MacBook Air has arrived! And it's a thing of beauty.

    First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor, upgraded to 8GB of RAM for an additional $100, SSD flash storage = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. Done a lot of reading, and between VMWare Fusion, Parallels and Bootcamp... I'm going to go with VMWare Fusion for $49.99.

    And now my impressions (please re-read disclaimer before proceeding!):

    - I open the box and am trying to understand exactly how the MagSafe connector works (and how to disconnect it).  Why does it have two socket outlet plugs? Who knows.  I feel like Hansel in Zoolander. The files are "in" the computer.
    - Stuck in my external hard drive (USB). So how do I get to the files? To the Googles!
    - Argh... it can't read my external NTFS drive. FAT32 can't support files over 4GB - problematic since some of my existing VMWare image files are much larger than 4GB. Didn't see this coming.
    - Three year old loves iPhoto. Super easy to use. Don't even know what I'm doing but I've already (accidentally) discovered the image filtering options. Fun stuff.
    - First thing I downloaded ever => Chrome. I need something to ground me, something familiar. My token, if you will (sorry, gratuitous Inception joke).
    - Ok, I get it... Finder == Windows Explorer. But where is my hierarchical structure? I miss the tree :(
    - On that note, yeah... how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares me more though... this notion of a smart folder. Feel like the Godfather - just get the job done, I don't care how you handle it, I don't want to know... just get it done.
    - What the hell is AirDrop?
    - Mail... just worked. Still in shock that they have a free client for Yahoo mail (please, no Yahoo jokes).
    - Mail -> deleting a message takes 5 seconds. Have they heard of async?
    - "Command" key instead of "Control"... ok, then what the $%&^! is the Control key for then?
    - "Aliases" == shortcuts, I think.
    - I don't see the file system. And I'm scared. All these things I'm downloading... these .dmg files (bad name) - where are they going? Can't seem to delete them when they're done.
    - Ugh... realized I need to buy a mini-to-VGA adaptor if I want to use my external monitor ($13 on eBay, $39 in the Apple store).
    - Window docking is trickiest for me... this notion of detached windows with a menu bar at the top. I don't like this paradigm, it's confusing. But maybe that's because I've been using Windows for too long.
    - Evernote and Dropbox desktop clients seem almost identical... few quirks here and there I need to get used to.
    - iTunes is still a bit gross. In a weird way it's actually worse on a Mac, if that's possible. This is not the MacBook's fault... this is a software design issue.

    Overall: the UI will take some getting used to. Can't decide if this represents the future and I'm stuck in the past... or this is the past and I've been spoiled by the future (which would be Windows... don't be hating, I happen to be very productive in Win7).  So there you go - my 90-minute first impression of the MacBook universe.

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems
    Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands.

    Also in the open source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding.

    Oracle Database, Middleware and Technology
    The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk.

    Applications
    According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies - one for B2B and one for B2C - surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals.

    In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle’s PeopleSoft, Oracle's Siebel CRM, and Oracle’s JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost.

    Industries
    For the communications industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization - including increased service agility and improved network resource sharing.

    And for the utilities industry, Oracle is releasing solutions with new business features and an enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • Do you know that every user story should have an owner?

    - by Martin Hinshelwood
    When you are building complicated software and working with customers, it is always nice for them to have some idea of who to speak to about a particular story during a sprint. In order to achieve this, one member of the team takes responsibility for “looking after” a story. They will collect all of the “Done” emails and make sure that everyone follows the Done criteria identified by the team, as well as answering any Product Owner queries.

    Figure: Bad example - the Product Owner is not sure who to speak to.
    Figure: Good example - the Product Owner can now see who he should speak to, and developers know where to send Done emails.

    Read the article

  • Dallas First Regionals 2012 – For Inspiration and Recognition of Science and Technology

    - by T
    Wow!  That is all I have to say after the last three days - three full, fun-filled days in a world that fed the geek, sparked the competitor, inspired the humanitarian, encouraged the inventor, and continuously warmed my heart.  As part of the Dallas First Regionals, I was awed by incredible students who teach as much as they learn, inventive and truly caring mentors who make mentoring look easy, and completely passionate and dedicated volunteers who bring meaning to giving all that you have while making events fun and safe for all.  If you have any interest in innovation, robotics, or highly motivated students, I can't recommend anything more highly than visiting a First Robotics event.

    This is my third year with First and I was honored to serve the Dallas First Regionals as both a web site evaluator and as the East Field Volunteer Coordinator.  This was also the first year my daughter volunteered with me.  My daughter and I both recognize how different the First program is from other team events.  The difference with First is that everyone is a first-class citizen.  It is a difference we can feel through experiencing and observing interactions between executives, respected engineers, students, and event staff.  Even with fierce competition, you still find a lot of cooperation, and we never witnessed any belittling between teams or individuals.

    First Robotics coined the term “Gracious Professionalism”: a way of doing things that encourages high-quality work, emphasizes the value of others, and respects individuals and the community.[1]  I was introduced to this term as the Volunteer Coordinator when I was preparing the volunteer instructional speech.  Through the next few days, I discovered it is truly core to everything that First does.  One of the ways First accomplishes Gracious Professionalism is through another term they came up with: “Coopertition™”. At FIRST, Coopertition™ is displaying unqualified kindness and respect in the face of fierce competition.[1]  One of the things I never liked about sports was the need to “destroy” the other team.  First has really found a way to integrate Coopertition™ into the rules so teams can be competitive while part of that competition rewards cooperation.

    Oh, and did I mention it has ROBOTS!!!  This year it had basketball-playing, Kinect-connected, remote-controlled robots!  There are no words for how exciting these games are.  You HAVE to check out this year's game, Rebound Rumble, on YouTube, or you can view live action on Dallas First Video (as of this posting, the recordings haven't been posted yet but should be there soon).  There are also some images below.

    Whatever it is, these students get it and exemplify it, and these mentors ooze with it.  I am glad that First supports these events so we can all learn and be inspired by these exceptional students and mentors.  I know that no matter how much I give, it will never compare to what I gain volunteering at First.  Even if you don't have time to volunteer, you owe it to yourself to go check out one of these events.  See what the future holds, be inspired, be encouraged, take some knowledge, leave some knowledge and most of all, have FUN.  That is what First is all about and, thanks to First, I get it.

    First Dallas Regionals 2012 [photo slideshow]

    [1] USFirst http://www.usfirst.org/aboutus/gracious-professionalism

    Read the article

  • The Oracle OpenWorld 2012 Call for Papers Closes April 9

    - by Kerrie Foy
    It is on! The Oracle OpenWorld 2012 Call for Papers closes April 9.  This year's OpenWorld event runs September 30 - October 4 at the Moscone Center, San Francisco. Oracle OpenWorld is among the world’s largest industry events for good reason. It offers a vast array of learning and networking opportunities in one of the planet’s great cities.  And one of the key reasons for its popularity is the prominence of presentations by customers.

    If you would like to deliver a presentation based on your experience, now is the time to submit your abstract for review by the selection panel. The competition is strong: roughly 18% of entries are accepted each year from more than 3,000 submissions. Review panels are made up of experts both internal and external to Oracle. Successful submissions often (but not exclusively) focus on customer successes, how-tos, or best practices. http://www.oracle.com/openworld/call-for-papers/information/index.html

    What is in it for you? Recognition, for one thing. Accepted sessions are publicized in the content catalog, which goes live in mid-June, and sessions given by external speakers often prove the most popular. Plus, accepted speakers get a complimentary pass to Oracle OpenWorld with access to all sessions and networking events - that could save you up to $2,595! Be sure to designate your session for inclusion in the correct track by selecting “APPLICATIONS: Product Lifecycle Management” from the Primary Track drop-down menu. Looking forward to seeing you at this year's OpenWorld!

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series, see also:
    - Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
    - Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

    Pattern applicability
    This is a good fit for scenarios where:
    - the runtime environment is secure enough to keep shared secrets
    - the consumer can execute custom code, including building HTTP requests with custom headers
    - the consumer cannot use the Azure SDK assemblies
    - the service may need to know who is consuming it
    - the service does not need to know who the end-user is

    Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet.

    In this post, we’ll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

    Authenticating and authorizing with ACS
    We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see walkthrough in Part 1). I've named the identity partialTrustConsumer. We’ll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key.

    We then need to do the same as in Part 2: add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus:

      Issuer: Access Control Service
      Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
      Input claim value: partialTrustConsumer
      Output claim type: net.windows.servicebus.action
      Output claim value: Send

    As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.

    RESTfully exposing the on-premise service through Azure Service Bus Relay
    The Part 3 sample code is ready to go, just put your Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files.  But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
    It's as easy as adding this endpoint to Web.config for the service:

      <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"
                binding="webHttpRelayBinding"
                contract="Sixeyed.Ipasbr.Services.IFormatService"
                behaviorConfiguration="SharedSecret">
      </endpoint>

    - and adding the webHttp attribute in your endpoint behavior:

      <behavior name="SharedSecret">
        <webHttp/>
        <transportClientEndpointBehavior credentialType="SharedSecret">
          <clientCredentials>
            <sharedSecret issuerName="serviceProvider"
                          issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>

    Where's my WSDL?
    The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to:

      http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help

    - you'll see the URI format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting:

      https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123

    Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven’t provided an authorization header:

      <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error>

    By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service; Service Bus only relays requests once they've got the necessary approval from ACS.

    Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT:

      var values = new System.Collections.Specialized.NameValueCollection();
      values.Add("wrap_name", "partialTrustConsumer"); // service identity name
      values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); // service identity password
      values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); // this is the realm of the RP in ACS

      var acsClient = new WebClient();
      var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values);
      rawToken = System.Text.Encoding.UTF8.GetString(responseBytes);

    With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus.
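    As a sketch of that manipulation (illustrative rather than lifted from the sample; it assumes the standard WRAP v0.9 response format of URL-encoded name/value pairs, and needs System.Linq, System.Net and System.Web):

      // Pull the wrap_access_token out of the raw ACS response and attach it
      // as a WRAP authorization header on the relayed REST call.
      var tokenValue = rawToken
          .Split('&')
          .Single(pair => pair.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
          .Split('=')[1];

      var serviceClient = new WebClient();
      serviceClient.Headers[HttpRequestHeader.Authorization] =
          string.Format("WRAP access_token=\"{0}\"", HttpUtility.UrlDecode(tokenValue));

      // Call the relayed REST endpoint from Part 1, routed through Azure.
      var reversed = serviceClient.DownloadString(
          "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");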
    Running the sample
    Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure.

    Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware of who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • CVE-2011-4862 Buffer Overflow vulnerability in Telnet

    - by chandan
    CVE            Description                      CVSSv2 Base Score   Component   Product and Resolution
    CVE-2011-4862  Buffer Overflow vulnerability    7.5                 Telnet      Solaris 10 (SPARC: 148657-01, X86: 148658-01); Solaris 11 11/11 SRU 04

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Crawling a Content Folio

    - by Kyle Hatlestad
    Content Folios in WebCenter Content allow you to assemble, track, and access a logical group of documents and/or links.  They let you manage content as just a list of items (a simple folio) or organized as a hierarchy (an advanced folio).  The built-in UI in content server allows you to work with these folios, but publishing them or consuming them externally can be a bit of a challenge.  The folios themselves are actually XML files that contain the structure, attributes, and pointers to the content items.  So to publish this somewhere, such as a Site Studio page, you could perhaps use an XML parser to traverse the structure and create your output.  But XML parsers are not always the easiest or most efficient to use.

    In order to more easily crawl and consume a Content Folio, Ed Bryant, Principal Sales Consultant, wrote a component to do just that.  His component adds a service which does all the work for you and returns the folio structure as a simple result set.  Consuming and publishing that folio on a Site Studio page or in your portal using RIDC then becomes a breeze!  For example, take an advanced Content Folio like the one pictured in the original post: the native file is raw XML, but if we access the folio using the new service - http://server/cs/idcplg?IdcService=FOLIO_CRAWL&dDocName=ecm008003&IsPageDebug=1 - we get a flat result set instead (shown here using the IsPageDebug parameter).  Given this as the result set, it makes it very easy to consume and repurpose that folio.

    You can download a copy of the sample component here. Special thanks to Ed for letting me share this component!
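    Outside of RIDC, the same service can also be hit over plain HTTP - a minimal C# sketch (the server name, credentials and dDocName are placeholders from the example URL above; IsJava=1 asks content server for a raw HDA response instead of a rendered page):

      using System;
      using System.Net;

      // Minimal sketch: call the FOLIO_CRAWL service and dump the result set.
      var client = new WebClient { Credentials = new NetworkCredential("sysadmin", "password") };
      string url = "http://server/cs/idcplg?IdcService=FOLIO_CRAWL"
                 + "&dDocName=ecm008003&IsJava=1";
      string resultSet = client.DownloadString(url);
      Console.WriteLine(resultSet);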

    Read the article

  • PeopleTools Strategy and Roadmap

    Jeff Robbins, Senior Director of PeopleTools Strategy for Oracle discusses with Cliff the highlights and key features of the most recent PeopleTools release, the benefits of the Applications Unlimited Program to PeopleTools customers, and how customers can prepare for Fusion.

    Read the article

  • Warehouse Management for Endeca: videos now available on YouTube

    - by Claudia Caramelli-Oracle
    The WMS product management team has recorded four videos on the Warehouse Management Extensions for Endeca - the application that manages warehouse operations in real time. Almost an hour of content, covering:

    - Introduction to the WMS Extensions for Endeca
    - Plan and Track Fulfillment
    - Space Utilization
    - Labor Utilization

    All four videos can be found by clicking here.

    Read the article

  • Refactoring Part 1: Intuitive Investments

    - by Wes McClure
    Fear - it’s what turns maintaining applications into a nightmare.  Technology moves on, teams move on, someone is left to operate the application, and what was green is now perceived brown.  Eventually the business will evolve and changes will need to be made.  The approach to those changes often dictates the long-term viability of the application.  Fear of change, lack of passion and a lack of interest in understanding the domain often lead to a paranoia to do anything that doesn’t involve duct tape and bailing twine.  Don’t get me wrong, those have a place in the short-term viability of a project, but they don’t have a place in the long term.  Add to it “us versus them” attitudes between the original team and those that maintain the application, internal politics and other factors, and you have a recipe for disaster.  This results in code that quickly becomes unmanageable.  Even the most clever of designs will eventually become suboptimal, and debt will accumulate that exponentially makes changes difficult.  This is where refactoring comes in, and it’s something I’m very passionate about.  Refactoring is about improving the process whereby we make change; it’s an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity.

    Investments, especially in the long term, require intuition and reflection.  How can we tackle new development effectively via evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us.  Small requests don’t warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I’ve begun work on a project that hasn’t had much debt, if any, paid down in years.  This is the first in a series of blog posts to try to capture the process, which is largely driven by intuition from smaller refactorings on other projects.

    Signs that refactoring could help:

    Testability
    How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test.  I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person!  This is simply unacceptable for almost any situation.  As it turns out, about 6 hours of working with this part of the application and I was able to cut the time down to under 30 seconds!  In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can’t test fast then we can’t change fast, nor with confidence.

    Code is used by end users and it’s also used by developers; consider your own needs in terms of the code base.  Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements.  What exactly is so wrong about test code in real code?  Often, these become features for operators and sometimes end users.  If you cannot run an integration test within a test runner in your IDE, it’s time to refactor.

    Readability
    - Are variables named meaningfully via a ubiquitous language?
    - Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area?
    - Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management etc)?
    - Is the code declarative (what) or imperative (how)?  What matters, not how.  LINQ is a great abstraction of the what, not how, of collection manipulation (see the short C# sketch near the end of this post).  The Reactive framework is a great example of the what, not how, of managing streams of data.
    - Are constants abstracted and named, or are they just inline?
    - Do people constantly bitch about the code/design?

    If the code is hard to understand, it will be hard to change with confidence.  It’s a large undertaking if the original designers didn’t pay much attention to readability, and as such it will never be done to “completion.”  Make sure not to go overboard; instead use this as you change an application, not in lieu of changes (like with testability).

    Complexity
    Simplicity will never be achieved; it’s highly subjective.  That said, a lot of code can be significantly simplified - tidy it up as you go.  Refactoring will often converge upon a simplification step after enough time, so keep an eye out for this.

    Understandability
    In the process of changing code, one often gains a better understanding of it.  Refactoring code is a good way to learn how it works.  However, it’s usually best in combination with other reasons, in effect killing two birds with one stone.  Often this is done when readability is poor, in which case understandability is usually poor as well.  In the large undertaking we are making with this legacy application, we will be replacing it.  Therefore, understanding all of its features is important and this refactoring technique will come in very handy.

    Unused code
    How can deleting things not help? This is a freebie in refactoring; it’s very easy to detect with modern tools, especially in statically typed languages.  We have VCS for a reason - if in doubt, delete it out (ok, that was cheesy)! If you don’t know where to start when refactoring, this is an excellent starting point!

    Duplication
    Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains.  That said, mediocre developers live by copy/paste.  Other times features converge and aren’t combined.  Tools for finding similar code are great for the copy/paste problems.  Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions and will give intuition for where to look for conceptual repetition.

    80/20 and the Boy Scouts
    It’s often said that 80% of the time 20% of the application is used most.  These tend to be the parts that are changed.  There are also parts of the code where 80% of the time is spent changing 20% (probably for all the refactoring smells above).  I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up.  If I spend 2 hours changing an application, in the 20%, I’ll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge over developers that don’t.  Ironically, after a short period of time the 20% shrinks enough that we don’t have to spend 80% of our time there and can move on to other areas.

    Refactoring is highly subjective; never attempt to refactor to completion!  Learn to be comfortable with leaving one part of the application in a better state than others.  It’s an evolution, not a revolution.
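    Coming back to the declarative/imperative point under Readability, here is a minimal C# sketch of the difference (the Order type and orders collection are hypothetical; assumes System.Linq and System.Collections.Generic are imported):

      // Imperative: spell out how to build the result, step by step.
      var highValue = new List<Order>();
      foreach (var order in orders)
      {
          if (order.Total > 1000m)
          {
              highValue.Add(order);
          }
      }
      highValue.Sort((a, b) => b.Total.CompareTo(a.Total));

      // Declarative with LINQ: state what result we want and let the
      // library decide how to produce it.
      var highValueLinq = orders
          .Where(o => o.Total > 1000m)
          .OrderByDescending(o => o.Total)
          .ToList();

    Both produce the same list of high-value orders sorted by total; the LINQ version reads as the what, not the how.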
    These are some simple areas to look into when making changes, and they can help get one started in the process.  I’ve often found that refactoring is a convergent process towards simplicity - one that sometimes spans a few hours but often leads to massive simplifications over weeks and months of regular development.

    Read the article

  • Silverlight ProgressBar issues with Binding

    - by Chris Skardon
    The ProgressBar pretty much does what it says on the tin: it displays progress, in a bar form (well, by default anyhow). It’s pretty simple to use:

      <ProgressBar Minimum="0" Maximum="100" Value="50"/>

    That gives you a progress bar with 50% of it filled. Easy! But of course we want to use binding to change the value. Again, pretty easy: have a ViewModel with a ‘Value’ in it, and bind:

      <ProgressBar Minimum="0" Maximum="100" Value="{Binding Value}"/>

    Spiffy, and whilst we’re at it, why not bind the Maximum value as well - after all, we can’t be sure of the size of the progress, and it’s a pain to have to work out the percentage (when the progress bar can do it for us):

      <ProgressBar Minimum="0" Maximum="{Binding MaximumValue}" Value="{Binding Value}"/>

    Right, this will work absolutely fine. Or will it??? On the face of it, it looks good, and testing it shows no issues, until at one point we go from:

      Maximum = 100;
      Value = 90;

    to

      Maximum = 60;
      Value = 50;

    On the face of it, not unreasonable. The problem is more obvious if we look at the states of the properties after each set (initially Maximum is set at 1, Value = 0):

      Code            Maximum   Value   Value < Maximum
      Maximum = 100;  100       0       True
      Value = 90;     100       90      True
      Maximum = 60;   60        90      False
      Value = 50;     60        50      True

    Everything is good until the Value exceeds the Maximum; at this point the ProgressBar breaks. That’s right, it no longer updates itself - it will always look 100% full. The simple solution - always ensuring you set Value before Maximum - is fine unless you’re using a ProgressBar in a less controlled environment, where for example you’re setting a ‘container’ with both values at the same time. The example I have is a DataTemplate, specifically the BusyContentTemplate of a BusyIndicator. The binding works this way:

      <BusyIndicator BusyContent="{Binding BusyContent}" BusyContentTemplate="{Binding ProgressTemplate}"/>

    With the template as the ProgressBar defined above… I was setting my BusyContent like this:

      BusyContent = content;

    aaaaaand finally, ‘content’ is a class:

      public class ContentClass : INotifyPropertyChanged
      {
          // Obviously this is properly implemented…
          public double Maximum { get; set; }
          public double Value { get; set; }
      }

    Soooo… as I was replacing the BusyContent wholesale, the order in which the bindings were set was outside of my control. So, how to go about it? Basically? Fudge it. Modify the ContentClass to include a method:

      public void Update(double value, double max)
      {
          Value = value;
          Maximum = max;
      }

    and change the setting code to:

      BusyContent.Update(content.Value, content.Maximum);

    thereby getting the order correct. Obvious really. Meh :|
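    For completeness, here is a minimal sketch of what “properly implemented” could look like (illustrative, not from the original post; raising PropertyChanged is what lets the bound ProgressBar pick up the new values, and Update sets Value before Maximum so the bar never sees Value > Maximum):

      public class ContentClass : INotifyPropertyChanged
      {
          private double _maximum = 1;
          private double _value;

          public double Maximum
          {
              get { return _maximum; }
              set { _maximum = value; OnPropertyChanged("Maximum"); }
          }

          public double Value
          {
              get { return _value; }
              set { _value = value; OnPropertyChanged("Value"); }
          }

          // Set Value first, then Maximum, to keep the ProgressBar happy.
          public void Update(double value, double max)
          {
              Value = value;
              Maximum = max;
          }

          public event PropertyChangedEventHandler PropertyChanged;

          private void OnPropertyChanged(string name)
          {
              var handler = PropertyChanged;
              if (handler != null) handler(this, new PropertyChangedEventArgs(name));
          }
      }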

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2.  This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

    Read the article

  • A Virtual Tour of the Oracle Universe

    - by A&C Redaktion
    The new "Oracle Hardware Virtual Tour" for iPhone and iPad is an animated journey of discovery through various Oracle products: you open enclosures, find the individual components, and can examine them, rotate them, and find out what they are for. Among the products to see and explore are Oracle Exadata, SPARC systems, Sun x86 systems, and the Sun Blade and Sun Netra systems. All of them set out to break performance records, to be easy to manage, to score on high availability, and to cut costs. A playful feature - but one that partners can put to profitable use in customer conversations. The 3D apps run on the iPhone 3GS, the iPad 2, and newer devices.

    Read the article

  • New Feature in ODI 11.1.1.6: Enterprise Data Quality Integration

    - by Julien Testut
    Oracle Data Integrator 11.1.1.6.0 introduces a new Open Tool called EnterpriseDataQuality which allows ODI users to invoke an Oracle Enterprise Data Quality Job from a Package. This post will give you an overview of this new feature.

    Oracle Enterprise Data Quality (OEDQ) provides organizations with an integrated suite of data quality tools that offer an end-to-end solution to measure, improve, and manage the quality of data from any domain, including customer and product data. The addition of the EnterpriseDataQuality Open Tool extends the inline Data Quality capabilities of Oracle Data Integrator with Oracle Enterprise Data Quality's powerful data profiling, cleansing, matching, and monitoring capabilities.

    The EnterpriseDataQuality Open Tool can invoke any OEDQ Job stored in a Project. This Open Tool connects to an OEDQ server using a JMX (Java Management Extensions) interface. Once installed, this Open Tool will be found under Plugins in the Package Toolbox area. The EnterpriseDataQuality Open Tool takes a couple of parameters as inputs, such as the Enterprise Data Quality Job and Project names, the OEDQ hostname and JMX port, etc. With the EnterpriseDataQuality Open Tool, ODI customers can now incorporate their Oracle Enterprise Data Quality processes within their Data Integration workflows.

    You will find instructions about how to use the Enterprise Data Quality Open Tool in the Oracle Data Integrator documentation at: Using the EnterpriseDataQuality Open Tool. You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview.

    Read the article

  • Caching of toolbar names in ArcMap

    - by Marko Apfel
    For small GUI changes in your own ArcMap customizations it is normally enough to start the application and check that every entry is visible. In particular, when refactoring an own toolbar it seemed sufficient to verify that the toolbar still appears in the toolbar list. But this is a fallacy: the entries in that list come from a cache. You can verify this by dumping information in the toolbar constructor - the constructor is only called if the toolbar is activated! Otherwise you still see the toolbar name, but that name comes out of a cache. The cache is stored in the registry under:

      HKCU\Software\ESRI\ArcMap\Settings\CommandBarNameCache
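    If you want to force ArcMap to rebuild that cache during development, a small C# sketch could look like this (an assumption, not from the article: it treats CommandBarNameCache as a value under Settings and also tries deleting it as a subkey, so it works either way; run with ArcMap closed):

      using Microsoft.Win32;

      // Clear the cached toolbar names so ArcMap re-reads them on next start.
      using (RegistryKey settings = Registry.CurrentUser.OpenSubKey(
          @"Software\ESRI\ArcMap\Settings", writable: true))
      {
          if (settings != null)
          {
              settings.DeleteValue("CommandBarNameCache", throwOnMissingValue: false);
              settings.DeleteSubKeyTree("CommandBarNameCache", throwOnMissingSubKey: false);
          }
      }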

    Read the article
