Search Results

Search found 15210 results on 609 pages for 'technical writing'.


  • SharpDX and game engines, back to zero?

    - by Baboon
    I'm a desktop developer (I mainly do WPF for a living) but I want to make games as a hobby. So a few months ago, I started reading blogs, gamedevSE, you name it. I understand that in the C++/DirectX world you have engines such as Unity3D with designers and whatnot, but as much as I'm OK with spending months understanding game development, I'd prefer to stay with my comfy C#. So I thought I'd develop my first games in C#/DirectX through SharpDX. But then I can't use game engines anymore, since they're made for C++/DirectX and not SharpDX (OK, I could do P/Invokes, but that defeats the purpose of SharpDX). I do know about XNA, and I also know I can't publish to the marketplace with it (and quite frankly, I don't really want to learn an API that is in jeopardy). So how do you reconcile writing games in C# with using existing game engines instead of reinventing the wheel? Wait for ports? Here is what I've found so far:
    1) MOgre: after doing the first tutorials and digging around, it seems MOgre gives you the worst of both worlds. You can't port it with Mono because it directly references a C++/CLI dll (Ogre) and C++/CLI isn't supported by Mono; and since it's not C++ itself, you can't make a pure build either. As far as I understand, that means I'd be stuck on Windows (and not even WinRT/Metro compatible), with no way to port anything to mobiles or other OSes (Mac/Linux). Even though it looks really nice to develop with MOgre, I'd like to learn something a little more open to future broadening.
    2) MonoGame, on the other hand, seems to be a rewrite of XNA on top of SharpDX, which sounds very promising: Mono allows me to easily port my games to other platforms, mobiles included, and SharpDX gives me access to the latest DirectX versions. XNA would be my first choice if only MS showed some hope for its future, and MonoGame really looks like nothing else than XNA on SharpDX.
    3) Axiom looks good too, but I lack information on the subject (and pages with poor design don't give me a good feeling about an API...).
    4) XNA with SunBurn looks good: it should be portable to Mono (can anyone with experience give us feedback on that?), thus multi-platform, and it's then marketplace-able since Mono itself is.
    Did I miss something, or are 2) and 4) my best options (aside from the fact that Mono doesn't support any XAML)?

    Read the article

  • Session Update from IASA 2010

    - by [email protected]
    Below: Tom Kristensen, senior vice president at Marsh US Consumer, and Roger Soppe, CLU, LUTCF, senior director of insurance strategy, Oracle Insurance. Tom and Roger participated in a panel discussion on policy administration systems this week at IASA 2010. This week was the 82nd Annual IASA Educational Conference & Business Show, held in Grapevine, Texas. While attending the conference, I had the pleasure of serving as a panelist in one of the many outstanding sessions conducted this year. The session - entitled "Achieving Business Agility and Promoting Growth with a Modern Policy Administration System" - included industry experts Steve Forte from OneShield, Mike Sciole of IFG Companies, and Tom Kristensen, senior vice president at Marsh US Consumer. The session was conducted as a panel discussion and focused on how insurers can leverage best practices to mitigate risk while enabling rapid product innovation through a modern policy administration system. The panelists offered insight into business and technical challenges for both Life & Annuity and Property & Casualty carriers. The session had three primary learning objectives: identifying how replacing a legacy system with a more modern policy administration solution can deliver agility and growth; identifying how processes and systems should be re-engineered or replaced in order to improve speed-to-market and product support; and uncovering how to leverage best practices to mitigate risk during a migration to a new platform. Tom Kristensen, who is an industry veteran with over 20 years of experience, was able to offer a unique perspective as a business process outsourcer (BPO). Marsh US Consumer is currently implementing both the Oracle Insurance Policy Administration solution and the Oracle Revenue Management and Billing platform while at the same time onboarding a new BPO customer. Tom offered insight on the need to replace their aging systems and Marsh's ability to drive new products and processes with a modern solution. As a best practice, their current project has empowered their business users to play a major role in both the requirements gathering and configuration phases. Tom stated that working with a modern solution has also enabled his organization to use a more agile implementation methodology and get hands-on experience with the software earlier in the project. He also indicated that Marsh was encouraged by how quickly it will be able to implement new products, which is another major advantage of a modern rules-based system. One of the more interesting issues was raised by an audience member who asked, "With all the vendor solutions available in North America and across Europe, what is going to make some of them more successful than others and help ensure their long-term success?" Panelist Mike Sciole of IFG Companies suggested that carriers do their due diligence and follow a structured evaluation process, focusing on vendors who demonstrate they have the "cash to invest in long term R&D" and evaluating audited annual statements for verification. Other panelists suggested that the vendor space will continue to evolve and that those with a strong strategy focused on the insurance industry and a solid roadmap will likely separate themselves from the rest. The session concluded with the panelists offering advice about not being afraid to evaluate new modern systems.
While migrating to a new platform can be challenging and is typically only undertaken every 15+ years by carriers, the ability to rapidly deploy and manage new products, to create consistent processes to better service customers, and to manage the business more effectively, transparently and securely is well worth the effort. Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • NoSQL is not about object databases

    - by Bertrand Le Roy
    NoSQL as a movement is an interesting beast. I kinda like that it's negatively defined (I happen to belong myself to at least one other such a-community). It's not, at its roots, about proposing one specific new silver bullet to kill an old problem; it's about challenging the consensus. Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal answer (although they have been used as one for so long) and to recognize instead that there's a whole spectrum of data storage solutions out there. Why is it so hard to recognize, by the way? You are already using some of those other data storage solutions every day. Let me cite a few: the file system, Active Directory, XML / JSON documents, the Web, e-mail, logs, Excel files, EXIF blobs in your photos, relational databases, and yes, object databases. It's just a fact of modern life. Notice, by the way, that most of the data you use every day is unstructured and thus mostly unsuitable for relational storage. It really is more a matter of recognizing it: you are already doing NoSQL. So what happens when for any reason you need to simultaneously query two or more of these heterogeneous data stores? Well, you build an index of sorts combining them, and that's what you query instead. Of course, there's not much distance to travel from that to realizing that querying is better done when completely separated from storage. So why am I writing about this today? Well, that's something I've been giving lots of thought, on and off, over the last ten years. When I built my first CMS all that time ago, one of the main problems my customers were facing was to manage and make sense of the mountain of unstructured data that constituted most of their business. The central entity of that system was the file system, because we were dealing with lots of Word documents, PDFs, OCR'd articles, photos and static web pages. We could have stored all that in SQL Server. It would have worked. Ew. I'm so glad we didn't. Today, I'm working on Orchard (another CMS ;). It's a pretty young project, but already one of the questions we get the most is how to integrate existing data. One of the ideas I'll be trying hard to sell to the rest of the team in the next few months is to completely split the querying from the storage. Not only does this provide great opportunities for performance optimizations, it gives you homogeneous access to heterogeneous and existing data sources. For free.
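    To make the "split querying from storage" idea concrete, here is a purely illustrative C# sketch (none of these types come from Orchard or from the post): each store, whatever it is, feeds entries into one index, and the application only ever queries that index.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class IndexEntry
    {
        public string Source;   // "file system", "e-mail", "relational database", ...
        public string Key;      // path, message id, primary key, ...
        public string Text;     // whatever was extracted for searching
    }

    public interface IContentSource
    {
        IEnumerable<IndexEntry> Extract();   // pull searchable entries out of one store
    }

    public class ContentIndex
    {
        private readonly List<IndexEntry> entries = new List<IndexEntry>();

        public void Rebuild(IEnumerable<IContentSource> sources)
        {
            entries.Clear();
            foreach (var source in sources)
                entries.AddRange(source.Extract());
        }

        // Queries never touch the underlying stores, only the index.
        public IEnumerable<IndexEntry> Search(string term)
        {
            return entries.Where(e =>
                e.Text.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0);
        }
    }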

    Read the article

  • ASP.NET MVC 3 Client-Side Validation Summary with jQuery Validation (Unobtrusive JavaScript)

    - by Soe Tun
    When we were working with ASP.NET MVC 2, we needed to write our own JavaScript to get a client-side validation summary with the jQuery Validation plugin. I am one of those unfortunate people still stuck with .NET Framework Runtime 2.0 and .NET Framework 3.5, meaning I am still on ASP.NET MVC 2, so I will keep on supporting my original code by answering any question you may have about it.   The long-awaited ASP.NET MVC 3 has been released, and it supports a client-side validation summary with jQuery out of the box, with new features like unobtrusive JavaScript.
    1. _Layout.cshtml Template. Notice that I am using protocol-relative URLs (i.e., '//', not 'http://' or 'https://') to reference script files and CSS files, and you should too. However, please note that IE7 and IE8 will download the CSS files twice, so use it with judgement.
    <!DOCTYPE html>
    <html>
    <head>
        <title>@ViewBag.Title</title>
        <link href="@Url.Content("~/Assets/Site.css")" rel="stylesheet" type="text/css" />
        <link href="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.9/themes/redmond/jquery-ui.css" rel="stylesheet" type="text/css" media="screen" />
    </head>
    <body>
        @RenderBody()
        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" type="text/javascript"></script>
        <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.9/jquery-ui.min.js" type="text/javascript"></script>
        <script src="//ajax.microsoft.com/ajax/jQuery.Validate/1.7/jQuery.Validate.min.js" type="text/javascript"></script>
        <script src="//ajax.aspnetcdn.com/ajax/mvc/3.0/jquery.validate.unobtrusive.min.js" type="text/javascript"></script>
    </body>
    </html>
    2. MVC View Template. There are three things you *must* do to get the client-side validation summary working. (1) You must declare your Validation Summary **inside** the `Html.BeginForm()` block, as below. (2) You must pass `excludePropertyErrors: false` to the Html.ValidationSummary() method.
    @using (Html.BeginForm())
    {
        @Html.ValidationSummary(false, "Please fix these errors.");
        <!-- The rest of your View Template -->
    }
    (3) You have to put the following two elements in the `<appSettings />` block of your Web.config file.
    <add key="ClientValidationEnabled" value="true"/>
    <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
    That is all you need to do. Simple, right? I will upload a sample project for download soon. Please let me know if you run into any issues. P.S.: Without getting into too much technical detail, I just wanted to let you know what I went through to get this to work. I had to look into the ASP.NET MVC 3 RTM source code and the jquery.validate.unobtrusive.js source. Initially, I thought I would have to hack jquery.validate.unobtrusive.js or something to get this to work, but after digging into the MVC 3 RTM source, I found out how to do it.

    Read the article

  • Monogame/SharpDX - Shader parameters missing

    - by Layoric
    I am currently working on a simple game that I am building in Windows 8 using MonoGame (develop3d). I am using some shader code from a tutorial (made by Charles Humphrey) and having an issue populating a 'texture' parameter, as it appears to be missing. Edit: I have also tried 'Texture2D' and using it with a register(t0), still no luck. I'm not well versed in writing shaders, so this might be caused by a more obvious problem. I have debugged through MonoGame's Content processor to see how this shader is being parsed; all the non-'texture' parameters are there and look to be loading correctly. Edit: This seems to go back to the D3D compiler. Shader code below:
    #include "PPVertexShader.fxh"
    float2 lightScreenPosition;
    float4x4 matVP;
    float2 halfPixel;
    float SunSize;
    texture flare;
    sampler2D Scene: register(s0)
    {
        AddressU = Clamp;
        AddressV = Clamp;
    };
    sampler Flare = sampler_state
    {
        Texture = (flare);
        AddressU = CLAMP;
        AddressV = CLAMP;
    };
    float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        texCoord -= halfPixel;
        // Get the scene
        float4 col = 0;
        // Find the sun's position in the world and map it to the screen space.
        float2 coord;
        float size = SunSize / 1;
        float2 center = lightScreenPosition;
        coord = .5 - (texCoord - center) / size * .5;
        col += (pow(tex2D(Flare, coord), 2) * 1) * 2;
        return col * tex2D(Scene, texCoord);
    }
    technique LightSourceMask
    {
        pass p0
        {
            VertexShader = compile vs_4_0 VertexShaderFunction();
            PixelShader = compile ps_4_0 LightSourceMaskPS();
        }
    }
    I've removed default values as they are currently not supported in MonoGame, and I also changed the vs and ps profiles to 4_0 instead of 2_0. Could this be causing the issue? As I debug through the 'DXConstantBufferData' constructor (from within the MonoGameContentProcessing project) I find that the 'flare' parameter does not exist. All others seem to be getting created fine. Any help would be appreciated. Update 1: I have discovered that the SharpDX D3D compiler is what seems to be ignoring this parameter (perhaps by design?). ConstantBufferDescription.VariableCount seems not to count the texture variable. Update 2: The SharpDX function 'GetConstantBuffer(int index)' returns the parameters (minus textures), which is making it impossible to set values for these variables within the shader. Anyone know if this is normal for DX11 / Shader Model 4.0? Or am I missing something else?
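    This behaviour is expected for Shader Model 4 and later: textures and samplers are bound resources (the t# / s# registers), not constant-buffer variables, so shader reflection lists them separately from the constant buffers. The following rough C# sketch is not the MonoGame pipeline code; the file name is invented and the exact SharpDX overloads should be checked against the SharpDX version in use. It shows where "flare" actually turns up:

    using System;
    using SharpDX.D3DCompiler;

    // Compile the pixel shader and reflect over the bytecode.
    var result = ShaderBytecode.CompileFromFile("LightSourceMask.fx", "LightSourceMaskPS", "ps_4_0");
    using (var reflection = new ShaderReflection(result.Bytecode))
    {
        // Only scalar/vector/matrix parameters live in constant buffers...
        for (int i = 0; i < reflection.Description.ConstantBuffers; i++)
        {
            var cb = reflection.GetConstantBuffer(i);
            Console.WriteLine("cbuffer {0}: {1} variables", cb.Description.Name, cb.Description.VariableCount);
        }

        // ...while textures and samplers show up as resource bindings.
        for (int i = 0; i < reflection.Description.BoundResources; i++)
        {
            var binding = reflection.GetResourceBindingDescription(i);
            Console.WriteLine("{0} bound as {1}", binding.Name, binding.Type);   // e.g. "flare" as a texture
        }
    }

    So rather than going through a constant buffer, the texture has to be assigned to its texture/sampler slot at draw time (in MonoGame, typically by setting the effect's texture parameter or GraphicsDevice.Textures), assuming the content processor carries the binding through.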

    Read the article

  • Data validation best practices: how can I better construct user feedback?

    - by Cory Larson
    Data validation, whether it be domain object, form, or any other type of input validation, could theoretically be part of any development effort, no matter its size or complexity. I sometimes find myself writing informational or error messages that might seem harsh or demanding to unsuspecting users, and frankly I feel like there must be a better way to describe the validation problem to the user. I know that this topic is subjective and argumentative. I've migrated this question from StackOverflow where I originally asked it with little response. Basically, I'm looking for good resources on data validation and user feedback that results from it at a theoretical level. Topics and questions I'm interested in are: Content Should I be describing what the user did correctly or incorrectly, or simply what was expected? How much detail can the user read before they get annoyed? (e.g. Is "Username cannot exceed 20 characters." enough, or should it be described more fully, such as "The username cannot be empty, and must be at least 6 characters but cannot exceed 30 characters."?) Grammar How do I decide between phrases like "must not," "may not," or "cannot"? Delivery This can depend on the project, but how should the information be delivered to the user? Should it be obtrusive (e.g. JavaScript alerts) or friendly? Should they be displayed prominently? Immediately (i.e. without confirmation steps, etc.)? Logging Do you bother logging validation errors? Internationalization Some cultures prefer or better understand directness over subtlety and vice-versa (e.g. "Don't do that!" vs. "Please check what you've done."). How do I cater to the majority of users? I may edit this list as I think more about the topic, but I'm genuinely interested in proper user feedback techniques. I'm looking for things like research results, poll results, etc. I've developed and refined my own techniques over the years that users seem to be okay with, but I work in an environment where the users prefer to adapt to what you give them over speaking up about things they don't like. I'm interested in hearing your experiences in addition to any resources to which you may be able to point me.

    Read the article

  • Q&A: Oracle's Paul Needham on How to Defend Against Insider Attacks

    - by Troy Kitch
    Source: Database Insider Newsletter: The threat from insider attacks continues to grow. In fact, just since January 1, 2014, insider breaches have been reported by a major consumer bank, a major healthcare organization, and a range of state and local agencies, according to the Privacy Rights Clearinghouse.  We asked Paul Needham, Oracle senior director, product management, to shed light on the nature of these pernicious risks—and how organizations can best defend themselves against the threat from insider risks. Q. First, can you please define the term "insider" in this context? A. According to the CERT Insider Threat Center, a malicious insider is a current or former employee, contractor, or business partner who "has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."  Q. What has changed with regard to insider risks? A. We are actually seeing the risk of privileged insiders growing. In the latest Independent Oracle Users Group Data Security Survey, the number of organizations that had not taken steps to prevent privileged user access to sensitive information had grown from 37 percent to 42 percent. Additionally, 63 percent of respondents say that insider attacks represent a medium-to-high risk—higher than any other category except human error (by an insider, I might add). Q. What are the dangers of this type of risk? A. Insiders tend to have special insight and access into the kinds of data that are especially sensitive. Breaches can result in long-term legal issues and financial penalties. They can also damage an organization's brand in a way that directly impacts its bottom line. Finally, there is the potential loss of intellectual property, which can have serious long-term consequences because of the loss of market advantage.  Q. How can organizations protect themselves against abuse of privileged access? A. Every organization has privileged users and that will always be the case. The questions are how much access should those users have to application data stored in the database, and how can that default access be controlled? Oracle Database Vault (See image) was designed specifically for this purpose and helps protect application data against unauthorized access.  Oracle Database Vault can be used to block default privileged user access from inside the database, as well as increase security controls on the application itself. Attacks can and do come from inside the organization, and they are just as likely to come from outside as attempts to exploit a privileged account.  Using Oracle Database Vault protection, boundaries can be placed around database schemas, objects, and roles, preventing privileged account access from being exploited by hackers and insiders.  A new Oracle Database Vault capability called privilege analysis identifies privileges and roles used at runtime, which can then be audited or revoked by the security administrators to reduce the attack surface and increase the security of applications overall.  For a more comprehensive look at controlling data access and restricting privileged data in Oracle Database, download Needham's new e-book, Securing Oracle Database 12c: A Technical Primer. 

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    Panelists talk social marketing at the Oracle Social Media Summit On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two day event featured an impressive collection of social media luminaries including: David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group) among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues who have been returning great social media results even before their companies were acquired by Oracle. I was live tweeting the event from @OracleProfit which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event and a few key topics have emerged: David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point. Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy is comprised of nothing more than cutting/pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture. Walter Benjamin, perplexed by auraless Twitter messages Not to get too far afield, but 20th century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: Aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he percieved between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork--the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the "sphere of authenticity is outside the technical" so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the "aura" of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them. 
Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction--that editor who reviews the analytics and responds to user feedback--interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day). At the end of the day, experimentation is key. Just like trying to find that right joke to tell at the beginning of your presentation or that good opening like at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website and, besides a contact form, it could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I would need either includes or a master page. I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using Notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to use jQuery to filter). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files. Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself; this is again way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated - but I'm happy to have that triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site. Ideally I'd like to use Mercurial (or git) to publish. I know Bitbucket (and GitHub) both serve static page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing. My requirements are: a simple templating system, with one place to define headers, footers, menu etc., that can be edited using just Notepad; a very minimal / lightweight framework (I don't need a monster for 10 pages); and it must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.
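    As one way of reading the "refresh on a visit to the clubs page" idea, here is a hedged C# sketch; the file path handling, the 24-hour threshold, and the UpdateClubsFile helper are all hypothetical and not from the question. The page handler checks how stale the persistence file is and only then re-scrapes:

    using System;
    using System.IO;

    public static class ClubsData
    {
        public static string Load(string path)
        {
            // Re-scrape only when the persisted file is missing or older than a day.
            if (!File.Exists(path) ||
                DateTime.UtcNow - File.GetLastWriteTimeUtc(path) > TimeSpan.FromHours(24))
            {
                UpdateClubsFile(path);
            }
            return File.ReadAllText(path);   // JSON or XML consumed by the clubs page / jQuery filter
        }

        private static void UpdateClubsFile(string path)
        {
            // Scrape the HQ dojo locator and rewrite the file; omitted here.
        }
    }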

    Read the article

  • Dual booting 12.10 and Win 7 - boots directly to Win 7

    - by user110174
    and thank you kindly for your help! I'll preface this with saying that I realize this is a common problem, with lots of troubleshooting guides available online; however, after multiple attempts with different guides, I've made zero progress and am hoping someone could help me with my specific scenario. First, my story: -Initially, I installed Ubuntu 12.10 with the "Something Else" option with no problems. Used 4 GB Swap Logical Partition, 26 GB Primary Root Partition. Wanting to try out Mint 13, I booted into Windows from GRUB2, used the latest version of EasyBCD (v2.2) to restore the Windows 7 bootloader to the MBR, deleted the Ubuntu partitions, and reformatted them in NTFS. I then created a 30 GB partition of free space for Mint. I installed Mint using the same partitioning described above for Ubuntu 12.10, using /dev/sda for the boot installation files, and everything seemed to go well, until I re-booted my computer and it went straight to Windows - I could find no way to get into Mint. So I went into Windows, restored the Windows bootloader to the MBR with EasyBCD, deleted partitions, etc., as I figured I'd done enough messing around and would go with Ubuntu 12.10. Now the problem: I restarted my computer booting from the same Ubuntu USB key I originally used. Briefly, "error: "prefix" is not set" flashed on screen, and instead of being greeted with the GUI menu of "try vs. install Ubuntu", there was a menu with minimal graphics (like a BIOS menu) where I could select install, run from USB, etc. After selecting "Install Ubuntu", the familiar install wizard with a GUI came up, I partitioned my drive as described, /dev/sda for the boot installation files, the install went well, rebooted and... straight to Windows. This is where I'm at. Fixes I've tried: -This guide: How can I repair grub? (How to get Ubuntu back after installing Windows?) to ensure GRUB is on the MBR. I followed all steps, but still, when I reboot, I go directly into Windows. -Installing 12.04 instead of 12.10 - same issue. -Re-installed Ubuntu, writing the boot files to their own partition, then using EasyBCD to add a boot option for Ubuntu using the Windows bootloader, ensuring I instruct EasyBCD to look at the partition I created with the Ubuntu installer (instructions here http://neosmart.net/wiki/display/EBCD/Ubuntu). When I reboot, I select the Ubuntu option, and it puts me in GRUB4DOS, with a cursor waiting for input. I have no idea what to put here, so I would just type "reboot" to exit out. And this is where I am now. Any clue as to why I can't boot into Ubuntu? My computer specs are: ASUS UX31A, Core i7, Win 7 64 Pro, 256 GB SSD, Intel HM76 chipset and integrated Intel HD 4000 graphics, 4 GB memory. I've tried to be as clear as possible, but I'd be happy to provide any info that would help anyone along. Thanks for your patience in reading this! Sincerely, -MN

    Read the article

  • SQLAuthority News – Online Webcast How to Identify Resource Bottlenecks – Wait Types and Queues

    - by pinaldave
    As all of you know, I have been working recently on the subject of SQL Server wait statistics; since I published my book on this subject, SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues [Amazon] | [Flipkart] | [Kindle], I have been encountering lots of questions and answers. When I was writing the book, I kept version 1 of the book in front of me. I wanted to write something which one can use right away. I wanted to create a primer for everybody who has not explored the wait stats method of performance tuning. Well, the book has been very well received - in fact we ran out of stock twice in India so far and once in the USA during SQLPASS. I have received so many questions on this subject that I feel I could write one more book of the same size. I have been asked if I can create videos which can go along with this book. Personally I am working with SQL Server 2012 CTP3, and there are so many new wait types that I feel the subject of wait stats is going to be very crucial in the next version of SQL Server. If you have not started learning about this subject, I suggest you at least start exploring it right now. Learn at least how to begin on this subject, so that when the next version comes in, you know how to read the DMVs. I will be presenting on this same subject of performance tuning by wait stats in the Embarcadero SQL Server Community Webinar. Here are a few topics which we will be covering during the webinar: beginning with SQL wait stats; understanding various aspects of SQL wait stats; understanding the query life cycle; identifying the three top wait stats; and resolution of the common 3 wait types and queues. Details of the webcast: How to Identify Resource Bottlenecks - Wait Types and Queues. Date and Time: Wednesday, November 2, 11:00 AM PDT. Registration Link. I thank Embarcadero for organizing the opportunity for me to share my experience on the subject of wait stats and for connecting me with the community to take this subject to the next level. One more interesting thing: I will ask one question at the end of the webinar and I will be giving away 5 copies of my SQL Wait Stats print book for the first five correct answers. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Preview Before You Paste with Live Preview in Office 2010

    - by DigitalGeekery
    Do you often find yourself frustrated that content you just copied and pasted didn't turn out the way you expected? With the new Live Preview in Office 2010, you can preview how copied content will look when it's pasted, even between Office applications. Not every paste preview option will be available in every circumstance. The available options will be based on the applications being used and what content is copied. Copy your content like normal by right-clicking and selecting Copy, pressing Ctrl + C, or selecting Copy from the Home tab. Next, select your location to paste the content. Now you can access the Paste Preview buttons either by selecting the Paste dropdown list from the Home tab…   …or by right-clicking. As you hover your cursor over each of the Paste Options buttons, you will see a preview of what it will look like if you paste using that option. Click the corresponding button when you find the paste option you like. "Paste" will paste all the content and formatting, as you can see below. Values will paste values only, no formatting. Formatting will paste only the formatting, no values. Hover over Paste Special to reveal any additional paste options. The process is similar in other Office applications. As you can see in the Word document below, Keep Text Only will paste the text, but not the orange color format from the original text. Even after you've pasted, there is still time to change your mind. After you paste content you'll see a Paste Options button near your content. If you don't, you can pull it up by pressing the Ctrl key. Note: this is also available after using Ctrl + V to paste. Click to enable the dropdown and select one of the available options. Using Live Paste Preview between multiple applications is just as easy. If we preview pasting the content from our Word document into PowerPoint by using the Keep Source Formatting option, we'll see that the outcome looks awful. Selecting Use Destination Theme will merge the text into the theme of the PowerPoint document and looks a lot better on our slide. Live Paste Preview is a nice addition to Office 2010 and is sure to save time spent undoing the unexpected consequences of pasting content. Looking for more Office 2010 tips? Check out some of our other Office 2010 posts like how to create a customized tab on the Office 2010 ribbon, and how to use the streamlined printing features in Office 2010.

    Read the article

  • Going Metro

    - by Tony Davis
    When it was announced, I confess I was somewhat surprised by the striking new "Metro" User Interface for Windows 8, based on Swiss typography, Bauhaus design, tiles, touches and gestures, and the new Windows Runtime (WinRT) API on which Metro apps were to be built. It all seemed to have come out of nowhere, like field mushrooms in the night, and seemed quite out of character for a company like Microsoft, which has hung on determinedly for over twenty years to its quaint windowing system. Many were initially puzzled by the lack of support for plug-ins in the "Metro" version of IE10, which ships with Win8, and the apparent demise of Silverlight, Microsoft's previous 'radical new framework'. Win8 signals the end of the road for Silverlight apps in the browser, but then its importance here has been waning for some time anyway, now that HTML5 has usurped its most compelling use case, streaming video. As Shawn Wildermuth and others have noted, if you're doing enterprise, desktop development with Silverlight then nothing much changes immediately, though it seems clear that ultimately Silverlight will die off in favor of a single WPF/XAML framework that supports those technologies that were pioneered on the phones and tablets. There is a mystery here. Is Silverlight dead, or merely repurposed? The more you look at Metro, the more it seems to resemble Silverlight. A lot of the philosophies underpinning Silverlight applications, such as the fundamentally asynchronous nature of the design, have moved wholesale into Metro, along with most of the Microsoft Silverlight dev team. As Simon Cooper points out, "Silverlight developers, already used to all the principles of sandboxing and separation, will have a much easier time writing Metro apps than desktop developers". Metro certainly has given the framework formerly known as Silverlight a new purpose. It has enabled Microsoft to bestow on Windows 8 a new "duality", as both a traditional desktop OS supporting 'legacy' Windows applications, and an OS that supports a new breed of application that can share functionality such as search, that understands, and can react to, the full range of gestures and screen-sizes, and has location-awareness. It's clear that Win8 is developed in the knowledge that the 'desktop computer' will soon be a very large, tilted, touch-screen monitor. Windows owes its new-found versatility to the lessons learned from Windows Phone, but it's developed for the big screen, and with full support for familiar .NET desktop apps as well as the new Metro apps. But the old mouse-driven Windows applications will soon look very passé, just as MSDOS character-mode applications did in the nineties. Cheers, Tony.

    Read the article

  • ODUG lands DotNetNuke guru Nik Kalyani as a speaker

    - by Brian Scarbeau
    If you are in the Orlando, FL area during the first week of May then you should head over to the Orlando DotNetNuke user group meeting. Nik Kalyani will be the speaker and you will learn a great deal from him. DotNetNuke Module Development with the MVP Pattern This session focuses on introducing attendees to the Model-View-Presenter pattern, support for which was recently introduced in the DotNetNuke Core. We'll start with a quick overview of the pattern, compare it to MVC, and then dive right into code. We will start with fundamentals and then develop a full-featured module using this pattern. In order to do justice to the pattern, we will use ASP.NET WebForms controls minimally and implement most of the UI using jQuery plug-ins. Finally, to increase audience participation (both present at the meeting and remote), we will use a hackathon-style model and allow anybody, anywhere to follow along with the presentation and code their own MVP-based solution that they can share online during or after the session. A URL with full instructions for the hackathon will be posted online a few days prior to the meeting. About Our Speaker Nik Kalyani is Co-founder and Strategic Advisor for DotNetNuke Corp., the company that manages the DotNetNuke Open Source project. Kalyani is also Founder and CEO of HyperCrunch. He is a technology entrepreneur with over 18 years of experience in the software industry. He is irrationally exuberant about technology, especially if it has anything to do with the Internet. HyperCrunch is his latest startup business that builds on his knowledge and experience from prior startups, two of them venture-funded. Kalyani is a creative tinkerer perpetually interested in looking around the corner and figuring out new and interesting ways to make the world a better place. An experienced web developer, he finds the business strategy and marketing aspects of the software business more exciting than writing code. When he does create software, his primary expertise is in creating products with compelling user experiences. Kalyani is most proficient with Microsoft technologies, but has no religious fanaticism about them. Kalyani has a bachelor’s degree in computer science from Western Michigan University. He is a frequent speaker at technology conferences and user group meetings. He lives in Mountain View, California with his wife and daughters. He blogs at http://www.kalyani.com and is @techbubble on Twitter.
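    As a generic illustration of the Model-View-Presenter pattern the session covers (this is not DotNetNuke-specific code and not material from the talk), the presenter owns the logic and talks to the view only through an interface, which is what makes the view swappable and testable:

    using System.Collections.Generic;

    public interface ITaskListView
    {
        string NewTaskTitle { get; }                  // what the user typed
        void ShowTasks(IEnumerable<string> tasks);    // how results get back to the UI
    }

    public class TaskListPresenter
    {
        private readonly ITaskListView view;
        private readonly List<string> tasks = new List<string>();   // stand-in "model"

        public TaskListPresenter(ITaskListView view)
        {
            this.view = view;
        }

        public void AddTask()
        {
            if (!string.IsNullOrEmpty(view.NewTaskTitle))
                tasks.Add(view.NewTaskTitle);
            view.ShowTasks(tasks);   // presenter pushes state back to the view
        }
    }

    A WebForms control, a jQuery-driven page or a plain test double can each implement ITaskListView without the presenter changing.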

    Read the article

  • SQL SERVER – Sharing your ETL Resources Across Applications with Ease

    - by pinaldave
    Frequently an organization will find that the same resources are used in multiple ETL applications, for example, the same database, general-purpose processing logic, or file system locations.  Creating an easy way to reuse these resources across multiple applications would increase efficiency and reduce errors.  Moreover, not every ETL developer has the same skill set, and it is likely that one developer will be more adept at writing code while another is more comfortable configuring database connections.  Real productivity gains will come when these developers are able to work independently while still making their work available to others assigned to the same project.  These are the benefits of a centralized version control system. Of course, most version control systems could be used to store and serve files, but the real need is to store and serve entire ETL applications so that each developer's ongoing work can immediately benefit from another developer's completed work.  In other words, the version control system needs to be tightly integrated with the tools used to develop the ETL application. The following screen shot shows such a tool: a desktop ETL tool that tightly integrates with a central version control system. Developers can check out or commit entire projects or just a single artifact.  Each artifact may be managed independently, so if you need to go back to an earlier version of one artifact, changes you may have made to other artifacts are not lost.  By being tightly integrated into the graphical environment used to create and edit the project artifacts, it is extremely easy and straightforward to move your files to and from the version control system, and there is no dependency on another vendor's version control system.  The built-in version control system is optimized for managing the artifacts of ETL applications. It is equally important that the version control system supports all of the actions one typically performs, such as rollbacks, locking and unlocking of files, and the ability to resolve conflicts.  Note that this particular ETL tool also has the capability to switch back and forth between multiple version control systems. It also needs to be easy to determine the status of an artifact - not just that it has been committed or modified, but when and by whom.  Generally you must query the version control system for this information, but having it displayed within the development environment is more desirable. Whose ETL tool works in this fashion?  Last month I mentioned the data integration solution offered by expressor software.  The version control features I described in this post are all available in their just-released expressor 3.1 Standard Edition through the integration of their expressor Studio development environment with a centralized metadata repository and version control system. You can download their Studio application, which is free, or evaluate the full Standard Edition on your own hardware.  It may be worth your time. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • BAM design pointers

    - by Kavitha Srinivasan
    In working recently with a large Oracle customer on SOA and BAM, I discovered that some BAM best practices are not as well known as I had always assumed! There is a doc bug out to formally incorporate those learnings, but here are a few notes. EMS-DO parity: When using EMS (Enterprise Message Source) as a BAM feed, the best practice is to use one EMS to write to one Data Object. There is a possibility of collisions and duplicates when multiple EMS write to the same row of a DO at the same time. This customer had 17 EMS writing to one DO at the same time. Every sensor in their BPEL process writes to one topic, and the topic was read by one EMS per sensor. They then used XSL within BAM to transform the payload into the BAM DO format. And hence, for a given BPEL instance, 17 sensors fired, populated 1 JMS topic, and were consumed by 17 EMS which in turn wrote to 1 Data Object. (You can imagine what would happen for later versions of the application that need to send more information to BAM!) We modified their design to use one master XSL based on sensorname for all sensors relating to a DO - say Data Object 'Orders' - and were thus able to reduce the 17 EMS to 1 with a master XSL. For those of you wondering how squeaky clean this design is, you are right! This is indeed not squeaky clean, and that brings us to yet another 'inferred' best practice. (I try very hard not to state the obvious in my blogs, with the hope that every time I blog it is very useful, but this one is an exception.) Transformations and calculations: It is optimal to do transformations within an engine like BPEL. Not only does this provide modelling ease with a nice GUI XSL mapper in JDeveloper, the XSL engine in BPEL is quite efficient at runtime as well. And so, doing XSL transformations in BAM is not quite prudent. The same is true for any non-trivial calculations as well. It is best to do all transformations and calculations and to sanitize the data in a BPEL or similar layer, and then send this to BAM (via JMS, WS etc.). This then delegates simply the function of report rendering and the mechanics of real-time reporting to the Oracle BAM reporting tool, which it is most suited to do. All nulls are not created equal: Here is yet another possibly known fact, but reiterated here. For an EMS with an Upsert operation: a) if empty tags or tags with no value are sent, like <Tag1/> or <Tag1></Tag1>, the DO will be overwritten with --null--; b) if empty tags are suppressed, i.e. not generated at all, the corresponding DO field will NOT be overwritten - the field will keep whatever value existed previously. For an EMS with an Insert operation, both tags with an empty value and no tags result in --null-- being written to the DO. Hope this helps. Happy 4th!

    Read the article

  • Algorithm for spreading labels in a visually appealing and intuitive way

    - by mac
    Short version: Is there a design pattern for distributing vehicle labels in a non-overlapping fashion, placing them as close as possible to the vehicle they refer to? If not, is either of the methods I suggest viable? How would you implement this yourself? Extended version: In the game I'm writing I have a bird's-eye view of my airborne vehicles. I also have, next to each of the vehicles, a small label with key data about the vehicle. This is an actual screenshot: Now, since the vehicles could be flying at different altitudes, their icons could overlap. However, I would like to never have their labels overlapping (or a label from vehicle 'A' overlap the icon of vehicle 'B'). Currently, I can detect collisions between sprites and I simply push away the offending label in a direction opposite to the otherwise-overlapped sprite. This works in most situations, but when the airspace gets crowded, the label can get pushed very far away from its vehicle, even if there was a "smarter" alternative. For example I get:
    B - label
    A -----------label
    C - label
    where it would be better (= label closer to the vehicle) to get:
    B - label
    label - A
    C - label
    EDIT: It also has to be considered that besides the overlapping-vehicles case, there might be other configurations in which vehicles' labels could overlap (the ASCII-art examples show, for example, three very close vehicles in which the label of A would overlap the icon of B and C). I have two ideas on how to improve the present situation, but before spending time implementing them, I thought to turn to the community for advice (after all it seems like a "common enough problem" that a design pattern for it could exist). For what it's worth, here are the two ideas I was thinking of:
    Slot-isation of label space: In this scenario I would divide all the screen into "slots" for the labels. Then, each vehicle would always have its label placed in the closest empty one (empty = no other sprites at that location).
    Spiralling search: From the location of the vehicle on the screen, I would try to place the label at increasing angles and then at increasing radiuses, until a non-overlapping location is found. Something down the lines of:
    try 0°, 10px
    try 10°, 10px
    try 20°, 10px
    ...
    try 350°, 10px
    try 0°, 20px
    try 10°, 20px
    ...
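    As a rough illustration of the spiralling search (this is not code from the question; it uses XNA/MonoGame-style types, and overlapsAnything is a hypothetical stand-in for your own sprite/label collision test):

    using System;
    using Microsoft.Xna.Framework;

    // Returns a free top-left position for the label, or null if every candidate collides.
    Vector2? FindLabelPosition(Vector2 vehiclePos, Point labelSize, Func<Rectangle, bool> overlapsAnything)
    {
        for (int radius = 10; radius <= 100; radius += 10)          // growing rings
        {
            for (int angle = 0; angle < 360; angle += 10)           // sweep each ring
            {
                float rad = MathHelper.ToRadians(angle);
                Vector2 candidate = vehiclePos + radius * new Vector2((float)Math.Cos(rad), (float)Math.Sin(rad));
                var rect = new Rectangle((int)candidate.X, (int)candidate.Y, labelSize.X, labelSize.Y);
                if (!overlapsAnything(rect))
                    return candidate;                               // first free slot wins: closest ring, then angle
            }
        }
        return null;                                                // airspace too crowded; fall back to pushing
    }

    The loop ordering (radius outermost, angle innermost) is what keeps each label as close to its vehicle as possible.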

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 2

    - by nikolaosk
    In this post I will be continuing my discussion on ASP.Net MVC 4.0 mobile development. You can have a look at my first post on the subject here. Make sure you read it and understand it well before you move on to reading the remainder of this post. I will not be writing any code in this post; I will try to explain a few concepts related to the MVC 4.0 mobile functionality. In this post I will be looking into the browser overriding feature in ASP.Net MVC 4.0. By that I mean that we override the user agent for a given user session. This is a very useful feature for people who visit a site through a device and experience the mobile version of the site, when what they really want is the option to switch to the desktop view. "Why might they want to do that?", you might wonder. Well, first of all, the users of our ASP.Net MVC 4.0 application will appreciate having the option to switch views, while some others will simply enjoy the contents of our website more in the "desktop view", since the mobile device they view our site on has a quite large display. Obviously this is still only one site; these are just different views that are rendered. To put it simply, browser overriding lets our application treat requests as if they were coming from a different browser rather than the one they are actually from. In order to do that programmatically we must have a look at the System.Web.WebPages namespace and the classes in it, most specifically the class BrowserHelpers. Have a look at the picture below. In this class we see some extension methods for the HttpContext class. These methods are called extension (helper) methods and we use them to switch from one browser to another, thus overriding the current/actual browser. These APIs affect layout, views and partial views, and will not affect any other ASP.Net Request.Browser related functionality. The overridden browser is stored in a cookie. Let me explain what some of these methods do. SetOverriddenBrowser() lets us set the user agent string to a specific value; GetOverriddenBrowser() lets us get the overridden value; ClearOverriddenBrowser() lets us remove any overridden user agent for the current request. To recap, in our ASP.Net MVC 4.0 applications, when our application is viewed on a mobile device, we can have a link like "Desktop View" for all those who desperately want to see the site in the full desktop-browser version. We can then specify a browser type override. My controller class that is responsible for handling the switching could be something like this (snippet of code):
    public class SwitchViewController : Controller
    {
        public RedirectResult SwitchView(bool mobile, string returnUrl)
        {
            if (Request.Browser.IsMobileDevice == mobile)
                HttpContext.ClearOverriddenBrowser();
            else
                HttpContext.SetOverriddenBrowser(mobile ? BrowserOverride.Mobile : BrowserOverride.Desktop);
            return Redirect(returnUrl);
        }
    }
    Hope it helps!!!!
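    For completeness, a "Desktop View" link in a mobile layout could point at that controller action. This one-liner is an assumption on my part: the action and controller names come from the snippet above, while the link text and route values are illustrative only.

    @Html.ActionLink("Display in Desktop View", "SwitchView", "SwitchView",
                     new { mobile = false, returnUrl = Request.Url.PathAndQuery }, null)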

    Read the article

  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting there in two different sessions. SQL Server performance tuning is a very challenging subject that requires expertise in both database administration and database development, and I have always enjoyed talking about it. Just like doctors, I like to call my every attempt to improve the performance of SQL Server queries and database servers a practice too. I have been working with SQL Server for more than 8 years and I believe that I have mastered many of the performance tuning concepts. However, performance tuning is not a simple subject: there are occasions when I feel stumped, occasions when I am not sure what the next step should be. When I face a situation where I cannot figure things out easily, it makes me most happy, because I clearly see this as a learning opportunity. I have been presenting at TechEd India for the last three years; this is my fourth opportunity to present a technical session on SQL Server. Just like every other year, I decided to present something different, something on which I have spent years of learning. This time, I am going to present about parallel processes. It is widely believed that more CPUs will improve the performance of the server. That is true in many cases. However, there are cases when limiting CPU usage has improved the overall health of the server. I will be presenting on the subject of parallel processes and their effects. I have spent more than a year working on this subject alone. After working on various queries on multi-CPU systems, I have personally learned a few things. In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning. Session Details. Title: Speed Up! - Parallel Processes and Unparalleled Performance (Add to Calendar). Abstract: "More CPU More Performance" - a very common understanding is that usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of parallel processes. In this deep-dive session, we will explore this complex subject with a very simple interactive demo. An attendee will walk away with a proper understanding of the CX_PACKET wait type, MAXDOP, the parallelism threshold and various other concepts. Date and Time: March 23, 2012, 12:15 to 13:15. Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru - 560001, Karnataka, India. Add to Calendar. Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is your gift, which I commit to right now - the SQL Server Interview Questions and Answers book. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Silverlight Cream for May 04, 2010 -- #855

    - by Dave Campbell
    In this Issue: John Papa, Adam Kinney, Mike Taulty, Kirupa, Gunnar Peipman, Mike Snow(-2-, -3-), Jesse Liberty, and Lee. Shoutout: Jeff Wilcox announced Silverlight Unit Test Framework: New version in the April 2010 Silverlight Toolkit. From SilverlightCream.com: Silverlight TV 23: MVP Q&A with WWW (Wildermuth, Wahlin and Ward) - John Papa has Silverlight TV 23 up, which is a panel discussion between Shawn Wildermuth, Dan Wahlin, Ward Bell and John... wow... what a crew! Design-time Resources in Expression Blend 4 RC - Adam Kinney reports on the new feature of the Expression Blend RC to load resources at design time. Adam also has a project available to demonstrate the concepts he's explaining. Silverlight and WCF RIA Services (1 - Overview) - Mike Taulty is starting a series on WCF RIA Services. This first one is an overview, and it looks to be a good series as expected. Introduction to Sample Data - Page 1 - Kirupa has a great 5-part post up about sample data in Expression Blend. Windows Phone 7 development: Using WebBrowser control - Gunnar Peipman posted about using the web browser control in WP7 to display RSS data. Good stuff, and all the code too. Silverlight Tip of the Day #10 - Converting Client IP to Geographical Location - Mike Snow's Tip #10 is about taking an IP address and getting a geographical location from it. Combine this with his Tip #9 that retrieves the IP address. Silverlight Tip of the Day #11 - Deploying Silverlight Applications with WCF web services - Mike Snow's Tip #11 is much bigger than most... it's almost an end-to-end solution for creating and deploying a WCF service, including resolving problems. Silverlight Tip of the Day #12 - Getting an Images Source File Name - Mike Snow also has tip #12 up, and it's a quick one on getting the original source file name for an image you've loaded. Screen Scraping - When All You Have Is A Hammer… - Jesse Liberty posted his solution to a self-imposed problem and ended up writing a 'mini tutorial on using Silverlight for creating desk-top utilities'... all with source. RIA services and combobox lookups - Lee has a post up about RIA Services and setting up comboboxes for lookups. Lots of source in the post and full project download.

    Read the article

  • Oracle Tutor: Are Documented Policies and Procedures Necessary?

    - by emily.chorba(at)oracle.com
    People refer to policies and procedures with a variety of expressions including business process documentation, standard operating procedures (SOPs), department operating procedures (DOPs), work instructions, specifications, and so on. For our purpose here, policies and procedures mean a set of documents that describe an organization's policies (rules) for operation and the procedures (containing tasks performed by individuals) to fulfill the policies. When an organization documents policies and procedures properly, they can be the strategic link between an organization's vision and its daily operations. Policies and procedures are often necessary because of some external requirement, such as environmental compliance or other governmental regulations. One example of an external requirement would be the American Sarbanes-Oxley Act, requiring full openness in accounting practices. Here are a few other examples of business issues that necessitate writing policies and procedures: Operational needs -- policies and procedures ensure fundamental processes are performed in a consistent way that meets the organization's needs. Risk management -- policies and procedures are identified by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as a control activity needed to manage risk. Continuous improvement -- procedures can improve processes by building important internal communication practices. Compliance -- well-defined and documented processes (i.e. procedures, training materials) along with records that demonstrate process capability can demonstrate an effective internal control system compliant with regulations and standards. In addition to helping with the above business issues, policies and procedures can support the basic needs of employees and management. Well-documented and easy-to-access policies and procedures: allow employees to understand their roles and responsibilities within predefined limits and to stay on the accepted path identified by the organization's management; provide clarity to the reader when dealing with accountability issues or activities that are of critical importance; allow management to guide operations without constant intervention; and allow managers to control events in advance and prevent employees from making costly mistakes. Can you think of another way organizations can meet the above needs of management and their employees in place of documented policies and procedures? Probably not, but we would love your feedback on this question. And that, my friends, is why documented policies and procedures are very necessary. Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Emily Chorba, Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.
    1. Recap and Prerequisites
    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples, which also include instructions on how to verify the objects are set up correctly.
    WebLogic Server Objects (Object Name, Type, JNDI Name):
    TestConnectionFactory, Connection Factory, jms/TestConnectionFactory
    TestJMSQueue, JMS Queue, jms/TestJMSQueue
    eis/wls/TestQueue, Connection Pool, eis/wls/TestQueue
    Schema XSD File
    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.
    stringPayload.xsd
    <?xml version="1.0" encoding="windows-1252" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns="http://www.example.org"
                targetNamespace="http://www.example.org"
                elementFormDefault="qualified">
      <xsd:element name="exampleElement" type="xsd:string">
      </xsd:element>
    </xsd:schema>
    JMS Message
    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:
    <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>
    JDeveloper Connection
    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server to which the process will be deployed.
    2. Create a BPEL Composite with a JMS Adapter Partner Link
    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model: when a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor the messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution would be to pause consumption for the queue and resume it again when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won’t complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code, which will print the message to standard output, so we can view it in the SOA server’s log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
    The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one.
    Create a SOA Project
    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite.
    Create a JMS Adapter Partner Link
    In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries:
    Service Name: JmsAdapterRead
    Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS
    AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located.
    Adapter Interface > Interface: Define from operation and schema (specified later)
    Operation Type: Consume Message
    Operation Name: Consume_message
    Consume Operation Parameters
    Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created in a previous example.
    JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
    Messages / Message Schema URL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project, to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the "Copy to Project" checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project's schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string. Press Next and Finish, which will complete the JMS Adapter configuration. Save the project.
    Create a BPEL Component
    Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema, select Template: Define Service Later and press OK.
    Wire the JMS Adapter to the BPEL Component
    Now wire the JMS adapter to the BPEL process by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level.
    3. Complete the BPEL Process Design
    Invoke the BPEL Flow via the JMS Adapter
    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received.
    At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance's flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter, etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server's log file, which will be easy to monitor.
    Add a Java Embedding Activity
    First get the full name of the process's input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement", and note the variable's name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary:
    // Log that the process received a message, then extract and print the payload.
    System.out.println("JmsAdapterReadSchema process picked up a message");
    oracle.xml.parser.v2.XMLElement inputPayload =
       (oracle.xml.parser.v2.XMLElement)getVariableData(
                              "Receive1_Consume_Message_InputVariable",
                              "body",
                              "/ns2:exampleElement");
    String inputString = inputPayload.getFirstChild().getNodeValue();
    System.out.println("Input String is " + inputString);
    Tip: If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server.
    4. Test the Composite
    Shut Down the JmsAdapterReadSchema Composite
    After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the composite first. Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0].
    Press the Shut Down button to disable the composite and confirm the following popup.
    Monitor Messages in the JMS Queue
    In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn't contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail.
    Send a Test Message
    In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console.
    Monitor the SOA Server's Output
    A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started; otherwise open and monitor the file to which it was redirected.
    Re-Enable the JmsAdapterReadSchema Composite
    In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message, and the following output should be written to the server's standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema. Note that you can also monitor the payload received by the process by navigating to the JmsAdapterReadSchema's Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too.
    5. Troubleshooting
    This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn't contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc., check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in the previous section. If this doesn't help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest.   Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. The intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases.  And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories.  But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t suddenly decide they wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord.  And that is how writing and the database began.  You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens of the ancient Sumerians from 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database.  Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive.  A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria.  This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents.  Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt.  It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it.  As any programmer who has forgotten to hit “save” or has experienced a sudden power outage knows, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently.  Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press.  Someday the databases we rely on so much today will become another chapter in the history of record keeping.  Who knows what the databases of tomorrow will look like! Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post of Prox ‘n’ Funx. This is a very interesting subject. By the way, Brad Schulz is my favorite guy when it comes to blogging. I respect him and learn a lot from him. Since everybody is writing something new on this subject, I decided to start a SQL Server 2012 analytic functions series. SQL Server 2012 introduces the new analytic function CUME_DIST(). This function provides the cumulative distribution value. It is difficult to explain this in words, so I will use a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use it for experiments. Let us run the following query.
    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, CUME_DIST() OVER(ORDER BY SalesOrderID) AS CDist
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY CDist DESC
    GO
    The above query will give us the following result. Now let us understand the formula behind CUME_DIST and why the values for SalesOrderID = 43670 are 1. Let us take another example and be clear about why the values for SalesOrderID = 43667 are 0.5. Now let us enhance the same example, use PARTITION BY in the OVER clause and see the results. Run the following query in SQL Server 2012.
    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID, CUME_DIST() OVER(PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID DESC, CDist DESC
    GO
    Now let us see the result of this query. We have changed the ORDER BY clause as well as partitioned by SalesOrderID. You can see that the CUME_DIST() function provides different results. Additionally, we now see the value 1 multiple times. As we are partitioning, we get a CUME_DIST() value within each group of SalesOrderID. CUME_DIST() was a long-awaited analytic function and I am glad to see it in SQL Server 2012. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
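    Since the result grids from the original post are not reproduced here, a short note on the formula: CUME_DIST for a row is the number of rows with a value less than or equal to the current row's value, divided by the total number of rows in the set (or partition). The query below is my own illustration, not part of the original post; it computes that ratio manually next to CUME_DIST() so the two can be compared.
    USE AdventureWorks
    GO
    -- Illustration only: compute the cumulative distribution by hand and
    -- compare it with the built-in CUME_DIST() value.
    SELECT  SalesOrderID,
            OrderQty,
            CUME_DIST() OVER (ORDER BY SalesOrderID) AS CDist,
            -- rows with SalesOrderID <= current row (ties included) / total rows
            CAST(COUNT(*) OVER (ORDER BY SalesOrderID
                                RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS FLOAT)
            / COUNT(*) OVER () AS ManualCDist
    FROM    Sales.SalesOrderDetail
    WHERE   SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY CDist DESC
    GO
    For the four orders used above, both columns should return the same values, which makes it easy to see why every row of the highest SalesOrderID (43670) ends up with a CUME_DIST of 1.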

    Read the article

  • Meet our Interns: Adam and Hanadi

    - by Maria Sandu
    This week, we’d like to introduce you to two of our ECEMEA Interns, Adam and Hanadi. They’re based in different countries and are part of different teams; however they both have the same enthusiasm in being an Intern at Oracle. “Hi! I’m Adam (Bachelor of Accounting Science & CIMA Diploma in Management Accounting), a member of the Oracle Applications Pre-sales team in Johannesburg, South Africa. Joining Oracle has been a truly inspiring experience thus far. My first week at Oracle has been one of insight and learning. I have had the opportunity to meet and interact with industry leading software solution professionals. Gaining insight into a mammoth multinational company has changed my perception on how things work and has truly opened my eyes to the world of business. Having the privilege of joining the Oracle Graduate Program has afforded me the chance to take advantage of countless training opportunities as well as the chance to learn about Information Technology in a practical manner which is vital to most businesses in today’s modern environment.” “Hi! I’m Hanadi, an Oracle 2013 Sales Intern from Saudi Arabia. I received my BSc in Information Technology from King Saud University and immediately after graduating I applied for the internship at Oracle. I thought it was an incredible opportunity and a great way to shift from college life to career life through learning and practicing in an environment with such high standards. At the beginning, I was a bit nervous in joining the serious business world, but once I joined, I found the program very organized and everyone was extremely helpful, which made it easier for us, as interns, to learn faster. If you are a self-motivated, committed person, who has initiative, accepts challenges, has good soft skills and some technical experience, I would definitely advice you to take a chance and apply for the program once you graduate. Best of luck!” Get the latest updates from the ECEMEA Sales and Presales Internship Programme 2013 by following #Oracleinterns on Twitter or visiting CampusatOracle Facebook Page!

    Read the article

< Previous Page | 431 432 433 434 435 436 437 438 439 440 441 442  | Next Page >