Search Results

Search found 23062 results on 923 pages for 'multiple models'.


  • Introducing the First Global Web Experience Management Content Management System

    - by kellsey.ruppel
    By Calvin Scharffs, VP of Marketing and Product Development, Lingotek. Globalizing online content is more important than ever. The total spending power of online consumers around the world is nearly $50 trillion, a recent Common Sense Advisory report found. Three years ago, enterprises had to translate content into 37 languages to reach 98 percent of Internet users. This year, it takes 48 languages to reach the same number of users.  For companies seeking to increase global market share, “translate frequently and fast” is the name of the game. Today’s content is dynamic and ever-changing, running the gamut from social media sites to company forums to press releases. With high-quality translation and localization, enterprises can tailor content to consumers around the world.  Speed and Efficiency in Translation When it comes to the “frequently and fast” part of the equation, enterprises run into problems. Professional service providers deliver translated content in files, which company workers then have to insert manually into their CMS. When companies update or edit source documents, they have to hunt down all the translated content and change each document individually.  Lingotek and Oracle have solved the problem by making the Lingotek Collaborative Translation Platform fully integrated and interoperable with Oracle WebCenter Sites Web Experience Management. Lingotek combines best-in-class machine translation solutions, real-time community/crowd translation and professional translation to enable companies to publish globalized content in an efficient and cost-effective manner. WebCenter Sites Web Experience Management simplifies the creation and management of different types of content across multiple channels, including social media.  Globalization Without Interrupting the Workflow The combination of the Lingotek platform with WebCenter Sites ensures that the process of authoring, publishing, targeting, optimizing and personalizing global Web content is automated, saving companies the time and effort of manually entering content. Users can seamlessly integrate translation into their WebCenter Sites workflows, optimizing their translation and localization across web, social and mobile channels in multiple languages. The original structure and formatting of all translated content is maintained, saving workers the time and effort involved in inserting the translated text and reformatting it.  In addition, Lingotek’s continuous publication model addresses the dynamic nature of content, automatically updating the status of translated documents within the WebCenter Sites workflow whenever users edit or update source documents. This enables users to sync translations in real time. The translation, localization, updating and publishing of Web Experience Management content happens in a single, uninterrupted workflow.  The net result of Lingotek Inside for Oracle WebCenter Sites Web Experience Management is a system that more than meets the need for frequent and fast global translation. Workflows are accelerated. The globalization of content becomes faster and more streamlined. Enterprises save time, cost and effort in translation project management, and can address the needs of each of their global markets in a timely and cost-effective manner.  About Lingotek Lingotek is an Oracle Gold Partner and will be one of the first Oracle Validated Integrator (OVI) partners with WebCenter Sites. Lingotek is also an OVI partner with Oracle WebCenter Content.
Watch a video about how Lingotek Inside for Oracle WebCenter Sites works! Oracle WebCenter will be hosting a webinar, “Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter,” tomorrow, September 13th. To attend the webinar, please register now! For more information about Lingotek for Oracle WebCenter, please visit http://www.lingotek.com/oracle.

    Read the article

  • Subsumption architecture vs. perceptual control theory

    - by Yasir G.
    I'm new to the AI field and I have to research and compare 2 different architectures for a thesis I'm writing. Before you scream (homework thread), I've been reading on these 2 topics only to find that I'm confusing myself more. Let me first state briefly what I know so far. Subsumption is based on the fact that the targets of a system differ in sophistication, which requires them to be added as layers; each layer can suppress (modify) the commands of the layers below it, and there are inhibitors to stop signals from executing, let's say. PCT stresses the fact that there are nodes to handle environmental changes (negative feedback), so the inputs coming from an environment go through a comparator node and then an action is generated by that node. HPCT (Hierarchical PCT) is based on nesting these cycles inside each other, so a small cycle to avoid crashing would be nested in a more sophisticated cycle that targets a certain location, for example. My questions: am I getting this right? Am I missing any critical understanding of these 2 models? Also, any idea where I can find simplified explanations of each theory? (So far I've been struggling to understand the papers from Google Scholar :< ) /Y
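    In case it helps to picture the PCT side, here is a toy sketch of the single comparator loop described above (my own illustration with made-up reference and gain values, not code from any PCT paper). HPCT would nest such loops, with an outer loop's output setting the reference of an inner one.

        // Illustrative only: a single PCT-style control loop.
        double reference = 10.0;   // what the node wants to perceive (e.g., a speed)
        double perception = 0.0;   // what the node currently perceives
        double gain = 0.5;         // how strongly the output responds to error

        for (int step = 0; step < 20; step++)
        {
            double error = reference - perception;  // comparator node (negative feedback)
            double output = gain * error;           // action generated by the node
            perception += output;                   // environment feeds back into perception
        }
        System.Console.WriteLine(perception);       // converges toward the reference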

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, we have had a common question come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context, so that we can show that there is a performance gain from the hardware crypto acceleration (not just the fact that the SPARC T4 is a much faster processor than the T3) and measure what it is for their application. With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI because in both of those classes of processor the encryption doesn't go through a device driver; instead it is implemented as unprivileged instructions callable from user land. It turns out there is a way to do this using features of the Solaris runtime loader (ld.so.1). First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions, selected at runtime based on the capabilities of the machine. The alternative to this is having the application call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and, unfortunately misnamed for historical reasons, libsoftcrypto.so). The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. However, we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
        aes-128-cbc   54170.81k    187416.00k   489725.70k   805445.63k   1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
        aes-128-cbc   29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes in the case where AES offload to the SPARC T4 was disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread; using -multi you will get even bigger numbers).

        $ openssl speed -evp aes-128-cbc
        ...
        type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
        aes-128-cbc   85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader; thanks Rod and Ali. Note the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.
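    Since the "t4" and "aes-ni" engines call getisax() directly, here is a minimal sketch of that style of capability check (my own illustration; the AV_SPARC_AES and AV_386_AES flag names are my assumption of the relevant Solaris 11 header constants). Note that LD_HWCAP steers the runtime loader's choice of ELF capability sections; it does not change what getisax() reports, which is exactly why the variable has no effect on those engines.

        /* Sketch: check for hardware AES support the way an application would. */
        #include <sys/auxv.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t cap = 0;
            getisax(&cap, 1);   /* fill cap with the first word of AV_* capability bits */
        #if defined(__sparc)
            printf("hardware AES: %s\n", (cap & AV_SPARC_AES) ? "yes" : "no");
        #else
            printf("AES-NI: %s\n", (cap & AV_386_AES) ? "yes" : "no");
        #endif
            return 0;
        }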

    Read the article

  • Cloud Without Compromise – Oracle Fusion HCM

    - by Jay Richey, HCM Product Marketing
    We’ve all heard about the cloud, and many HR organizations have already launched cloud initiatives. But too many cloud HCM vendors can’t deliver on their promise to lower costs, reduce risk and improve efficiency. When only 5% of CEOs are satisfied with HR*, something needs to change. Only Oracle delivers the promise of the cloud in deployment models tailored to your needs – giving you cloud without compromise. Oracle Fusion HCM provides a unified system with all the analytics and reporting tools you need. Join us for an engaging and insightful webcast this Wednesday, November 16th, at 9am Pacific to learn more about how Oracle Fusion HCM can fulfill your promise. http://www.oracle.com/us/dm/sev100018463-wwmk11040178mpp002-521274.html

    Read the article

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday, which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf! On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics!
    How to fix bad plans:
    - Use local variables – the optimizer can’t sniff them, so it’ll optimize for an “average” value
    - Use RECOMPILE (at the query or stored procedure level) – at a CPU cost
    - Use the OPTIMIZE FOR hint – supply the most common value you’ll pass
    How to fix bad query patterns:
    - Don’t use them – ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)
    (A sketch of the catch-all fix appears after this post.) She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn’t recommend these.  Examples are query hints and plan guides. While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future. Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair! Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work, even on systems where I’ve been the primary DBA for years; maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, so you don’t have to constantly update sp_Blitz when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.
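    For what the catch-all fix looks like in practice, here is a sketch against a hypothetical Orders table (my own example, not code from Gail's session):

        -- Catch-all search: OPTION (RECOMPILE) lets the optimizer build a
        -- fresh plan for the parameter values actually supplied on each call.
        CREATE PROCEDURE dbo.SearchOrders
            @CustomerID int  = NULL,
            @OrderDate  date = NULL
        AS
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        WHERE (@CustomerID IS NULL OR CustomerID = @CustomerID)
          AND (@OrderDate  IS NULL OR OrderDate  = @OrderDate)
        OPTION (RECOMPILE);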

    Read the article

  • OPA Mobile Now Available on iTunes AppStore and Google Play

    - by Richard Lefebvre
    A free standalone app demonstrating the power of Oracle Policy Automation (OPA) Interviews is available on both Apple’s iTunes AppStore and Google Play (for Android). Later in 2014, customers will be able to deploy their own policy models to the mobile app using the new OPA Hub!

    Read the article

  • What's a "Cloud Operating System"?

    - by user12608550
    What's a "Cloud Operating System"? Oracle's recently introduced Solaris 11 has been touted as "The First Cloud OS". Interesting claim, but what exactly does it mean? To answer that, we need to recall what characteristics define a cloud and then see how Solaris 11's capabilities map to those characteristics. By now, most cloud computing professionals have at least heard of, if not adopted, the National Institute of Standards and Technology (NIST) Definition of Cloud Computing, including its vocabulary and conceptual architecture. NIST says that cloud computing includes these five characteristics: On-demand self-service Broad network access Resource pooling Rapid elasticity Measured service How does Solaris 11 support these capabilities? Well, one of the key enabling technologies for cloud computing is virtualization, and Solaris 11 along with Oracle's SPARC and x86 hardware offerings provides the full range of virtualization technologies including dynamic hardware domains, hypervisors for both x86 and SPARC systems, and efficient non-hypervisor workload virtualization with containers. This provides the elasticity needed for cloud systems by supporting on-demand creation and resizing of application environments; it supports the safe partitioning of cloud systems into multi-tenant infrastructures, adding resources as needed and deprovisioning computing resources when no longer needed, allowing for pay-only-for-usage chargeback models. For cloud computing developers, add to that the next generation of Java, and you've got the NIST requirements covered. The results, or one of them anyway, are services like the new Oracle Public Cloud. And Solaris is the ideal platform for running your Java applications. So, if you want to develop for cloud computing, for IaaS, PaaS, or SaaS, start with an operating system designed to support cloud's key requirements…start with Solaris 11.

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts, then join us as we take a look at the Forecastfox Weather extension for Google Chrome. Getting Started As soon as the Forecastfox Weather extension has finished installing, you will automatically be presented with the “Customize Forecastfox Page”. The default setting is for New York with English measurement units. Enter your location into the blank and hit “Enter” to display the listing for your city/area. If you are presented with multiple options to choose from, simply click on the appropriate listing. Once you have your city/area displayed, you will notice that it is possible to have access to weather forecasts for multiple locations. You can easily remove any unneeded listings with the “Remove” link. For our example we removed the New York listing. Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required). Forecastfox Weather in Action You can hover your mouse over the “Toolbar Button” to see the current weather conditions. Clicking on the “Toolbar Button” opens a popup window with the current conditions, 7-day forecast, and a static satellite image. If desired, you can access additional details for the current weather conditions. Clicking on “details” opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc. Note: AccuWeather.com webpages will have some ads displayed. Perhaps you need the Hourly Forecast… Once again a new tab will be opened, with the predicted hourly weather conditions for the current day. Going back to the popup window, you may also select a specific day from the 7-day forecast. You will be presented with a “Day & Night” forecast for the chosen day, with links to view “Additional Details & Hourly” information. Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open, you can choose from a variety of different satellite images. Conclusion If you have been wanting a solid weather forecast extension for your Chrome browser, then Forecastfox Weather is definitely a recommended install. Links: Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • Hide and Unhide Worksheets and Workbooks in Excel 2007 & 2010

    - by DigitalGeekery
    Hiding worksheets can be a simple way to protect data in Excel, or just a way to reduce the clutter of some tabs. Here are a couple of very easy ways to hide and unhide worksheets and workbooks in Excel 2007 / 2010. Hiding a Worksheet Select the worksheet you’d like to hide by clicking on the tab at the bottom. By holding down the Ctrl key while clicking, you can select multiple tabs at one time. On the Home tab, click on Format, which can be found in the Cells group. Under Visibility, select Hide & Unhide, then Hide Sheet. You can also simply right-click on the tab and select Hide. Your worksheet will no longer be visible; however, the data contained in the worksheet can still be referenced on other worksheets. Unhide a Worksheet To unhide a worksheet, you just do the opposite. On the Home tab, click on Format in the Cells group and then under Visibility, select Hide & Unhide, then Unhide Sheet. Or, you can right-click on any visible tab and select Unhide. In the Unhide pop-up window, select the worksheet to unhide and click “OK.” Note: Although you can hide multiple sheets at once, you can only unhide one sheet at a time. Very Hidden Mode While hidden mode is nice, it’s not exactly ultra-secure. If you’d like to pump the security up a notch, there is also Very Hidden mode. To access the Very Hidden setting, we’ll have to use the built-in Visual Basic Editor by hitting the Alt + F11 keys. Select the worksheet you wish to hide from the dropdown list under Properties, or by single-clicking the worksheet in the VBAProject window. Next, set the Visible property to 2 – xlSheetVeryHidden. Close out of the Visual Basic Editor when finished. When the Very Hidden attribute is set on a worksheet, Unhide Sheet is still unavailable from within the Format setting on the Home tab. To remove the Very Hidden attribute and display the worksheet again, go back into the Visual Basic Editor by hitting Alt + F11 again and setting the Visible property back to –1 – xlSheetVisible. Close out of the Editor when finished. Hiding a Workbook To hide the entire workbook, select the View tab, and then click the Hide button. You’ll see the workbook has disappeared. Unhide a Workbook Select the View tab and click Unhide… and your workbook will be visible again. Just a few simple ways to hide and unhide your Excel worksheets and workbooks.
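    If you'd rather script the Very Hidden step than set it in the Properties pane, a macro like this works too (a sketch assuming a worksheet named "Secret"; the constants are the same xlSheetVeryHidden and xlSheetVisible values used above):

        ' Sketch: toggle the Very Hidden attribute from VBA.
        ' Assumes the active workbook contains a worksheet named "Secret".
        Sub MakeVeryHidden()
            Worksheets("Secret").Visible = xlSheetVeryHidden  ' won't appear in the Unhide dialog
        End Sub

        Sub MakeVisibleAgain()
            Worksheets("Secret").Visible = xlSheetVisible
        End Sub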

    Read the article

  • Which problem(s) do YOU want to see solved?

    - by buu700
    My team and I are meeting tonight to come up with a business plan, and some community input would be amazing. I've been mulling over this issue for the past few months and bouncing ideas off of others, and now I'd finally like some input from the community. I have come up with a fair selection of ideas, but most of those amount to either fun projects which could potentially be profitable, or otherwise solid business models that have one or two major hurdles (usually related to resources or legality). For our team meeting tonight, my idea is to take inventory of our available skills, resources, and compelling problems which interest us. The last is where I would greatly appreciate some community input. Hell, even entire business ideas/plans would be appreciated. No matter how big or small your thoughts, any input would be appreciated. We're a team of computer scientists, so our business will be primarily based around software/technology/Web solutions. Among my relevant available resources (entire Internet aside), I have the following:
    - A pretty reliable connection to an SEO company and a large production company.
    - A stash of fairly powerful server hardware.
    - A fast network with static IPs.
    - The backend for Hackswipe, which includes credit card payment processing and a Google Voice-based SMS gateway.
    - This work-in-progress design for something completely unrelated but which is backed by some fairly decent infrastructure.
    - Direct access to experts in just about any relevant field (on-campus Carnegie Mellon professors).
    - A sexual relationship with the baron of a small nation.
    - For further down the line, some investor relationships.
    - Not likely to be so relevant, but a decent social media presence (Stack Overflow reputation, modship in some major reddits, various tech forums).
    - The source code for Eugene fucking McCabe.
    Pooled with the other team members, the list of projects we can build off of would be longer (including an Android app). So, what are your thoughts? Crossposted to reddit.

    Read the article

  • Seamless STP with Oracle SOA Suite

    - by user12339860
    STP stands for “Straight Through Processing”. Wikipedia describes STP as a solution that enables “the entire trade process for capital markets and payment transactions to be conducted electronically without the need for re-keying or manual intervention, subject to legal and regulatory restrictions”. I will deal with the latter part of the definition, i.e. “payment transactions without manual intervention”, in this article. The STP that I am writing about involves the interaction between a Bank and its corporate customers; to that extent this business case is also called “Corporate Payments”. Simply put, a Corporate Payments STP solution needs to connect the payment transaction right from the corporate ERP into the Bank’s payment hub. A SOA-based STP solution can do a lot more than just process transactions. But before I get to the solution, let me describe the perspectives of the two primary parties in this interaction: the corporate customer and the Bank.
    The Corporate's Interaction with the Bank: Typically it is the treasury department of an enterprise which interacts with the Bank on a daily basis. Here is what a day of interaction would look like from the treasury department of a corporate:
    - Corporate Cash: retrieve beginning-of-day totals, monitor cash accounts, send or receive cash between accounts, supply chain payments
    - Payment Settlements: calculate settlement positions, retrieve end-of-day totals, assess transaction financial impact
    - Short Term Investment Desk: retrieve current account information, conduct investment activities
    The Bank's Interaction with the Corporate: From the Bank’s perspective, the interaction starts from the point of on-boarding a corporate customer and extends to billing the corporate for the value-added services it provides. Once the corporate is on-boarded, the daily interaction involves:
    - Handling the various formats of data arriving from customers
    - Processing beginning-of-day and end-of-day reporting requests from customers
    - Meeting compliance requirements
    - Processing payments
    - Transmitting payment status
    Challenges with this Interaction: Both the Bank and the Corporate face many challenges from these interactions. Some of the challenges include:
    - Keeping a consistent view of transaction data for the various LOBs of the corporate and the Bank
    - Corporate customers use different ERPs, hence the data formats are bound to be different. Can the Bank’s IT systems convert the data formats so that they can be easily mapped to the corporate ERP?
    - How does the Bank manage the communication profiles of these customers?
    - Corporate customers are demanding near real-time visibility into their corporate accounts
    - Corporate customers can make better cash management decisions if they can analyse the impact
    - Can the Bank create opportunities to sell its products to the investment desks at corporate houses and manage their orders?
    - How will the Bank bill the corporate customer for the value-added services it provides?
    What does a SOA-based seamless STP solution bring to the table?
    Highlights of the Oracle SOA-based STP solution, for the Corporate Customer:
    - No manual or paper-based banking transactions
    - Secure delivery of payment data to the Bank from multiple ERPs without customization
    - A single portal for monitoring and administering payment transactions
    - Rule-based validation of payments
    - The customer has the data necessary for more effective handling of payment and cash management decisions
    - Business measurements track progress toward payment cost goals
    For the Bank:
    - Reduces the time and complexity of transactions
    - Simplifies the process of introducing new products to corporate customers
    - A single payment hub for all corporate ERP payments across multiple instruments
    - New revenue sources from delivering value-added services to customers
    - Leverages existing payment infrastructure
    - Removes inconsistent data formats and interchange between bank and corporate systems
    - Compliance, and many other benefits

    Read the article

  • To sell or give for free

    - by QAH
    Hello everyone! I am currently making a game that I was originally planning to sell. It is a simple 2D arcade-style game for the PC. I've seen many indie games become popular and generate revenue from advertisements, while the game itself remains free. I need some advice on whether I should sell my game, release it for free with advertisements, or ask for donations and keep the game free. I feel that my game is fun, but of course the graphics aren't tip-top because I am a programmer, not an artist. I just take screenshots of 3D models I get from Turbosquid and crop around them to make sprites. Also, and I could be very wrong about this, it seems that there are more legal issues surrounding selling a game than making it free and generating revenue from advertisement, or asking for donations. If I am wrong, someone please correct me. Also, I am very interested in generating some revenue for my work, but that isn't at the very top of my list. I am in my last year of high school, soon to be going to college, and I am going to major in computer science/software engineering. So I am trying to gain some preliminary experience at home by coding stuff every day. One way of getting this experience is by making this game. So what do you think? What route should I take? What has worked well with other indie games? Thanks in advance.

    Read the article

  • Flow-Design Cheat Sheet – Part II, Translation

    - by Ralf Westphal
    In my previous post I summarized the notation for Flow-Design (FD) diagrams. Now it's time to show you how to translate those diagrams into code. Hopefully you feel how different this is from UML. UML leaves you alone with your sequence diagram or component diagram or activity diagram. It leaves it to you how to translate your elaborate design into code. Or maybe UML thinks it's so easy no further explanations are needed? I don't know. I just know that, as soon as people stop designing with UML and start coding, things end up very different from the design. And that's bad. It degrades graphical designs to just wasted time on paper (or in some designer tool). I even believe that's the reason why most programmers view textual source code as the only and single source of truth. Design and code usually do not match. FD is trying to change that. It wants to make true design a first-class method in every developer's toolchest. For that, the first prerequisite is to be able to easily translate any design into code. Mechanically, without thinking. Even a compiler could do it :-) (More on that in some other article.) Translating to Methods The first translation I want to show you is for small designs. When you start using FD you should translate your diagrams like this. Functional units become methods. That's it. An input-pin becomes a method parameter, an output-pin becomes a return value. That covers a part. But a board can be translated likewise; it calls the nested FUs in order. In any case, be sure to keep the board method clear of any and all business logic. It should not contain any control structures like if, switch, or a loop. Boards do just one thing: calling nested functional units in proper sequence. What about multiple input-pins? Try to avoid them. Replace them with a join returning a tuple. What about multiple output-pins? Try to avoid them. Or return a tuple. Or use out-parameters. But as I said, this simple translation is for simple designs only. Splits and joins are easily done with method translation. All pretty straightforward, isn't it. But what about wires, named pins, entry points, explicit dependencies? I suggest you don't use this kind of translation when your designs need these features. Translating to methods is for small-scale designs like you might do once you're working on the implementation of a part of a larger design. Or maybe for a code kata you're doing in your local coding dojo. Instead of doing TDD try doing FD and translate your design into methods. You'll see that way it's much easier to work collaboratively on designs, remember them more easily, keep them clean, and lessen the need for refactoring. Translating to Events [coming soon]
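    To make the method translation concrete, here is a tiny sketch of my own (not from the original post), assuming a toy flow ParseNumbers -> Sum:

        // Sketch: two parts and the board that wires them together.
        // The board only sequences parts -- no if, switch, or loop.
        using System.Linq;

        static class Flow
        {
            // part: input-pin -> parameter, output-pin -> return value
            static int[] ParseNumbers(string csv) =>
                csv.Split(',').Select(int.Parse).ToArray();

            // part
            static int Sum(int[] numbers) => numbers.Sum();

            // board: calls nested functional units in proper sequence
            public static int Board(string csv) => Sum(ParseNumbers(csv));
        }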

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition JDK bug management from Sun's legacy system to an initially Oracle-internal JIRA instance which is afterward made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort: JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones. Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999. We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project. JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together. The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA has only a one-level classification: component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include:
    - core-libs
    - client-libs
    - deploy
    - install
    - security-libs
    - other-libs
    - tools
    - hotspot
    For the libs areas, the primary name of the subcomponent will be the package of the API in question. In the core-libs component, there will be subcomponents like:
    - java.lang
    - java.lang.class_loading
    - java.math
    - java.util
    - java.util:i18n
    In the tools component, subcomponents will primarily correspond to command names in $JDK/bin, like jar, javac, and javap. The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline of when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Uganda .NET Usergroup April meeting

    - by Malisa L. Ncube
    Our April meeting was presented by Wilson Kutegeka on the topic of building the data access layer. In his presentation he showed a tool he has developed to generate the entities and stored procedures, which can be used to avoid retyping the same boilerplate code for each entity. He used Visual Basic samples to demonstrate access to the data from the database, and he inherits his classes from an abstract class which contains common properties including connection strings and save and delete methods. A number of questions emerged from the group, mostly from those that use business-model-based approaches. Some of the questions were on unit testing and mocking the models without using the database, and on the use of IoCs and loosely coupled patterns. Others were on caching, LINQ support, and data-annotations-based validation. The presentation details can be found here. Intellisense LTD agreed to sponsor our website, and we are glad to have that as we really need to have a website running. We would like to thank the following companies for supporting our community activities: Apress, Telerik, Manning, DevExpress (CodeRush), NCover, and Intellisense.   Technorati Tags: Uganda .NET Usergroup

    Read the article

  • Hardware compatibility on H97 chipset/hardware support

    - by user3238850
    I am aware that there is documentation about compatibility, but it is way outdated. I am also aware that there is a hardware compatibility page on the Ubuntu website, but that one is focused on the whole box rather than a single piece of hardware. I have some experience with Linux, and some experience playing with Ubuntu Server in a virtual machine, but I have never worked on a machine that lives on the real Internet. I am building a home server with an Intel H97 chipset motherboard. I have looked at several models and none of them has Linux in the supported OS category. I have installed Ubuntu Desktop 14.04 on my 4-year-old laptop, and except for some system errors on startup, there is not too much I can complain about, so I guess I should be fine. However, this time I am going to install Ubuntu Server 14.04 on a relatively new piece of hardware (I went to http://linux-drivers.org/ but found nothing really helpful). For example, the ASUS motherboard has an M.2 socket and an Intel LAN I218V chip, and the Gigabyte motherboard has two LAN chips (Intel LAN WGI217V and Atheros AR8161-BL3A-R). So I really want to make sure everything will work. Usually I would just trust Ubuntu and buy all the hardware I need, but based on my past experience with the Ubuntu Desktop version on my laptop, I am not so convinced. There is an easily noticeable difference: when the system is idle, the fan runs much more frequently and longer under Ubuntu. This leads to my suspicion that hardware will generally have worse support for Ubuntu, which is not surprising at all but is enough for me to put this post here. And as far as I know, some Intel CPU features come with software that usually will not run under Linux. Any help, ideas or thoughts would be greatly appreciated!

    Read the article

  • The Endeca UI Design Pattern Library Returns

    - by Joe Lamantia
    I'm happy to announce that the Endeca UI Design Pattern Library - now titled the Endeca Discovery Pattern Library - is once again providing guidance and good practices on the design of discovery experiences.  Launched publicly in 2010 following several years of internal development and usage, the Endeca Pattern Library is a unique and valued source of industry-leading perspective on discovery - something I've come to appreciate directly through  fielding the consistent stream of inquiries about the library's status, and requests for its rapid return to public availability. Restoring the library as a public resource is only the first step!  For the next stage of the library's evolution, we plan to increase the scope of the guidance it offers beyond user interface design to the broader topic of discovery.  This could include patterns for architecture at the systems, user experience, and business levels; information and process models; analytical method and activity patterns for conducting discovery; and organizational and resource patterns for provisioning discovery capability in different settings.  We'd like guidance from the community on the kinds of patterns that are most valuable - so make sure to let us know. And we're also considering ways to increase the number of patterns the library offers, possibly by expanding the set of contributors and the authoring mechanisms. If you'd like to contribute, please get in touch. Here's the new address of the library: http://www.oracle.com/goto/EndecaDiscoveryPatterns And I should say 'Many thanks' to the UXDirect team and all the others within the Oracle family who helped - literally - keep the library alive, and restore it as a public resource.

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • Implementing an automatic navigation mesh generation for 2d top down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel-based, not tile-based, and have associated data with them such as scale, rotation, origin, etc. I will obviously need some vertex data generated from my world objects; maybe I could generate polygons from color data? I could create a colormap with objects for my whole map, but I have no idea how to begin creating nav mesh polygons. How would an actual navigation mesh generation process look with this kind of information available? Can anyone maybe point to some great resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also have a lot of their required data available from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.

    Read the article

  • Should a Parent with Children have a DefaultChild, or should a Child have a Default property?

    - by Stijn
    Which of the following two models makes more sense? I'm leaning towards the first one because there can only be one default child. The examples are in C# but I think it can apply to other languages too. Here DefaultChild holds one of the items in Children.

        class Parent
        {
            int ID { get; set; }
            Child DefaultChild { get; set; }
            IEnumerable<Child> Children { get; set; }
        }

        class Child
        {
            int ID { get; set; }
        }

    Here one of the items in Children has Default set to true while the others have it set to false.

        class Parent
        {
            int ID { get; set; }
            IEnumerable<Child> Children { get; set; }
        }

        class Child
        {
            int ID { get; set; }
            bool Default { get; set; }
        }

    A concrete situation: a User in our system has one or more Customers attached. When logging in, if said User has a default Customer, they are immediately working under this Customer. If they don't, they have to select a Customer to work under. While logged in, they can switch between Customers.
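    One argument for the first model is that its invariant ("the default is one of the children") can be enforced in a single place. A sketch of my own, not from the question:

        using System;
        using System.Collections.Generic;

        class Parent
        {
            private readonly List<Child> _children = new List<Child>();
            private Child _defaultChild;

            public int ID { get; set; }
            public IEnumerable<Child> Children { get { return _children; } }
            public void AddChild(Child child) { _children.Add(child); }

            public Child DefaultChild
            {
                get { return _defaultChild; }
                set
                {
                    // Enforce the invariant centrally: the default must be a child.
                    if (value != null && !_children.Contains(value))
                        throw new ArgumentException("DefaultChild must be one of Children.");
                    _defaultChild = value;
                }
            }
        }

        class Child
        {
            public int ID { get; set; }
        }

    With the second model, keeping exactly one Default == true requires scanning and resetting all siblings on every change.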

    Read the article

  • Integrating with Oracle Fusion Applications: Discovering Integration Artifacts

    - by Lionel Dubreuil
    Oracle Enterprise Repository serves as the core element of the Oracle SOA Governance solution. An industry-leading metadata repository, Oracle Enterprise Repository provides a solid foundation for delivering governance throughout the service-oriented architecture (SOA) lifecycle by acting as the single source of truth for information surrounding SOA assets and their dependencies. For Fusion Applications, the use of OER has been extended to include other integration asset types such as interface tables and other technical information such as data models, tables, views, lookups, profile options, et cetera. E-Business Suite users familiar with iRepository or eTRM will recognize the functionality in Fusion Applications OER. Oracle Enterprise Repository for Fusion Applications provides a common catalog of technical information, searchable using many different mechanisms. Customers can locate technical information by the name, description or keyword of the information they are looking for. They can also search by the type of asset they are trying to locate and/or where the asset sits in the product taxonomy. They can also see how the asset dances in the choreography of some illustrative co-existence scenarios. These scenarios are laid out as both functional flow diagrams and technical interaction diagrams. Rajesh Raheja, software architect at Oracle, has recently posted an article on this topic: visibility and control are the key tenets of SOA governance, and the first step in integrating with Oracle Fusion Applications is to find out what integration options are available. Oracle Enterprise Repository, an industry-leading metadata repository, provides this visibility. You can find his full blog post here.

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains). Introduction SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and the emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and to avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments. Why split cores can matter Split-core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O-bound or low-CPU-utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down. What to do about it Split-core situations are easily avoided. In most cases the logical domain constraint manager will avoid them without any administrative action, but they can be entirely prevented by one of several actions (condensed into a short example session after this post):
    - Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low-utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests.
    Summary Split-core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.
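    Condensing the options above into a quick example session ("mydomain" is a placeholder; these are the same commands quoted in the text):

        # Whole-core constraint: two entire cores, no split-core risk.
        ldm set-core 2 mydomain

        # Equivalent thread-level allocation on a core boundary: 2 cores x 8 threads.
        ldm set-vcpu 16 mydomain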

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following (fields such as _parser, _pipeline, _algorithm, Engines, and _logger are elided):

        class Transformer {
            void Process(InputSource input) {
                try {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                } catch (Exception ex) {
                    var inner = new ProcessException(input, ex);
                    _logger.Error("Unable to parse source " + input.Name, inner);
                    throw inner;
                }
            }
        }

        class Pipeline {
            IEnumerable<Result> Transform(IEnumerable<Record> records) {
                // NOTE: no try/catch as I have no useful information to provide
                // at this point in the process
                var results = _algorithm.Process(records);
                // examine and do useful things with results
                return results;
            }
        }

        class Algorithm {
            IEnumerable<Result> Process(IEnumerable<Record> records) {
                var results = new List<Result>();
                foreach (var engine in Engines) {
                    foreach (var record in records) {
                        try {
                            results.Add(engine.Process(record));
                        } catch (Exception ex) {
                            var inner = new EngineProcessingException(engine, record, ex);
                            _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                            throw inner;
                        }
                    }
                }
                return results;
            }
        }

        class Engine {
            Result Process(Record record) {
                for (int i = 0; i < record.SubRecords.Count; ++i) {
                    try {
                        Validate(record.SubRecords[i]);
                    } catch (Exception ex) {
                        var inner = new RecordValidationException(record, i, ex);
                        _logger.Error("Validation of subrecord {0} failed for record {1}", i, record);
                        throw inner;
                    }
                }
                return new Result(); // result construction elided
            }
        }

    There are a few important things to notice:
    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all important and useful information
    - Logging only happens when failure to do so would cause loss of useful information at a lower level.
    Thoughts and concerns:
    - I don't like having so many log entries for each error
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message.
    - I can log at different levels (e.g., warning, informational)
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced).
    - The information available at higher levels should not be passed to the lower levels.
    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
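    For comparison, one way to avoid the triple logging (my own sketch, not part of the question): log once at the outermost boundary and let exception chaining carry the lower-level detail, since rendering the outer exception includes every inner exception's message and stack trace.

        // Sketch: one log entry per failure at the boundary; the chained
        // EngineProcessingException/RecordValidationException details ride along.
        try
        {
            transformer.Process(input);
        }
        catch (Exception ex)
        {
            // ex.ToString() renders the full chain of inner exceptions.
            _logger.Error("Transform failed for " + input.Name + ": " + ex);
            throw;
        }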

    Read the article

  • MPI Cluster Debugger launch integration in VS2010

    Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template). Of course, you create a single MPI application, and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e., machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e., Debug - Start Debugging)?

    WATCH IT INSTEAD OF READING ABOUT IT
    If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading.

    REQUIREMENT
    When you debug an MPI application, you want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. "Resources" here includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You would also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All these things are delivered in VS2010.

    STEPS TO F5
    1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger.
    2. There are a whole bunch of properties here, and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine, and if you change the number after that to, for example, 4, it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, then hit OK and… you are done.
    3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment, and symbol resolution to take place).

    GOING DEEPER
    In the MPI Cluster Debugger project properties above, you can see many additional properties besides the Run Environment. They are all optional, but you may want to understand them in order to fine-tune your cluster debugging. Read all about each one of these on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the head node name and the number of processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing you an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010.

    Happy MPI debugging! Comments about this post welcome at the original blog.
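
    For a rough sense of what F5 is automating, the default local Run Environment corresponds to launching multiple processes of the same binary by hand with MS-MPI's launcher. A hypothetical sketch (MyMpiApp.exe is a made-up name; the cluster scenario additionally deploys your binaries, dependencies, and msvsmon to each node for you):

        REM Local equivalent of "run 4 processes on this machine"
        mpiexec -n 4 MyMpiApp.exe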

    Read the article

  • User Productivity Kit - Powerful Packages (Part 1)

    - by [email protected]
    User Productivity Kit provides the ability to create a variety of content types, including robust topics on system processes and web pages with formatted text and graphics. There are times when you want to enhance content with media types not natively created by User Productivity Kit - media types such as video, custom animations, forms, and more. One method of doing this is to maintain these media files on a web server - separate from the User Productivity Kit player content - and link to the files using absolute URLs such as http://myserver/overview.html. While this will get you going, you won't benefit from the content management capabilities of the UPK Developer. Features such as check-in / check-out, history, document properties, folder permissions and more are not available to this external content. Further, if you ever need to move that content to a server with a different name or domain, you'd need to update all your links.

    UPK version 3.1 introduced a new document type - the package. A package is a group of folders and files that you manage in the Developer library as a single document. These package documents work in the same manner as any other document in the library, and you can use all of the collaborative content development features you see with other document types. Packages can be used for anything from single Word documents, PDF files, and graphics to more intricate sets of inter-related files commonly seen with HTML files and their graphics, style sheets, and JavaScript files. The structure of the files and folders within a package will always be preserved, which means that any relative links between files in the package will work. For example, an HTML file containing an image tag with a relative link to a graphic elsewhere in the same package will continue to function properly both when viewed in the Developer and when published to outputs such as the UPK Player.

    Once you start to use packages, you'll soon discover that there is a lot of existing content that can be re-purposed by placing it into UPK packages. Packages are easily created by selecting File...New...Package. Files can be added in a number of ways, including the "Add Files" button, copy & paste from Windows Explorer, and drag & drop. To use one of the files in the package, just create a link to the file in the package you want to target. This is supported throughout the Developer in places such as section & topic concepts, frame links, and hyperlinks in web pages.

    A little more challenging is determining how to structure packages in your library. As I mentioned earlier, a package can contain anything from a single file to dozens of files and folders. So what should you do? You could create a package for each file. You could create one package for all your files. But which one is right? Well, there's no single right answer to this question; there are advantages and disadvantages to each. The right decision will be influenced by the package files themselves, the structure of the content in the library, the size and working style of the development team, how content is shared between different outlines, and more.

    The first consideration can be assessed the quickest: if the content to be placed in the package is composed of multiple files and those files reference each other, they should be in the same package. There are loads of examples of this type of content. HTML files with graphics and style sheets, HTML files with embedded Flash movies, and Word documents saved as HTML are all examples where the content is composed of multiple files and the files reference each other in some way. Content like this should always be placed in a single package so that these relative links between the files are preserved and play properly in the UPK Player; a short sketch of such a link appears below. In upcoming posts, I'll explain additional considerations.
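
    To make the relative-link point concrete, here is a minimal hypothetical sketch (the package, file, and folder names are made up): an HTML file and a graphic stored together in one package, with the HTML referencing the graphic relatively, so the link keeps working wherever the package is published because the folder structure is preserved:

        ProductOverview (package)
          overview.html
          images/diagram.png

        <!-- inside overview.html: a relative link into the same package -->
        <img src="images/diagram.png" alt="Product overview diagram">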

    Read the article
