Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Focus on Oracle Data Profiling and Data Quality 11g - 24/Feb/11

    - by Claudia Costa
    Thursday 24th February, 11am GMT

    Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges.

    Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis.

    Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services.

    During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.

    Agenda:
    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator
    - Live demo
    - Q&A

    This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • Interested in an Experience Pass for Java, SQL or PL/SQL?

    - by britta wolf
    In May, the Oracle Academy launched an Experience Pass campaign for the first time as part of its "Introduction to Computer Science" program. The Introduction program gives teachers the opportunity to take part in specially organized trainings on Java, Database Design, SQL or PL/SQL, in effect working through a selected training path. After successful completion (with an Oracle Academy certificate), these subjects can then be taught at the teachers' respective schools. This special training program has been running successfully in Austria for several years and has also been offered to German schools since spring 2014! Teachers who have already been teaching Java, SQL or PL/SQL for quite some time and do not need the training can request a so-called Experience Pass. With such a pass you can access the hosted course content and use it in class. Do you need further information? Then feel free to contact me at [email protected]

    Read the article

  • SCCM2012 R2 – How to integrate MDT with SCCM

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/11/12/sccm2012-r2--how-to-integrate-mdt-with-sccm.aspx

    There are two not so much competing as parallel products: Microsoft Deployment Toolkit and System Center Configuration Manager. A few years ago I wondered why they are separate, and why I cannot share task sequences between them. And as it usually happens in life, while I was focused on other technologies, MDT and SCCM became best friends. Let's integrate MDT with SCCM:

    1. First you need to download MDT: http://www.microsoft.com/en-us/download/details.aspx?id=25175
    2. Install MDT on your SCCM server box: accept the EULA, and join the CEIP if you like.
    3. Once you have completed the installation, I would recommend completing the MDT configuration before integrating with SCCM. Start the Deployment Workbench, install updates (you will need to download and install WAIK), create a new deployment share, and leave the default values.
    4. Create the MDT database. Make sure you create a separate database; DO NOT use the existing SCCM DB.
    5. Now we are ready to integrate MDT with SCCM; the integration tool should discover your server automatically.

    After reopening the SCCM console, you should have cool new features in task sequences. How to use them? That's another story…
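    As a side note, the deployment share from step 3 can also be created with MDT's own PowerShell module rather than the Workbench wizard. A minimal sketch (not from the original post; the install path, drive name, and share paths are assumptions):

        # Load the MDT PowerShell module (default MDT install path assumed)
        Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"

        # Create a deployment share and make it persist in the Deployment Workbench
        New-PSDrive -Name "DS001" -PSProvider "MDTProvider" -Root "D:\DeploymentShare" `
            -Description "MDT Deployment Share" -NetworkPath "\\SCCM01\DeploymentShare$" |
            Add-MDTPersistentDrive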

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1].

    FairWarning has used MySQL as their solutions' database from their start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and upgraded from MySQL 5.5 to 5.6. Following are some of the benefits they've seen as a result of those changes, and the reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data
    FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs
    "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering.

    Performance Schema
    As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the MySQL 5.6 performance schema's low-level metrics have provided critical insight into how the system is performing and why.

    Support for Multiple CPU Threads
    MySQL 5.6's support for multiple concurrent CPU threads, together with FairWarning's custom data loader, allows multiple files to load into a single table simultaneously instead of one at a time. As a result, their data load time has improved five-fold.

    MySQL Enterprise Backup
    Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Backup's hot backup capability, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide a Complete Solution
    "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More:
    - FairWarning MySQL Case Study
    - Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    - Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and/or slides + Q&A transcript)
    - MyISAM to InnoDB - Why and How on-demand webinar (audio and/or slides + Q&A transcript)
    - Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January, 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.
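    To make the MyISAM-to-InnoDB step concrete, here is a minimal sketch (not from the case study; the table name events_log is hypothetical) of the engine conversion plus the kind of MySQL 5.6 performance schema query mentioned above:

        -- Convert a hypothetical MyISAM audit table to InnoDB (the table is rebuilt)
        ALTER TABLE events_log ENGINE=InnoDB;

        -- Verify the engine change
        SELECT TABLE_NAME, ENGINE
          FROM information_schema.TABLES
         WHERE TABLE_NAME = 'events_log';

        -- MySQL 5.6 performance schema: which wait events dominate, and by how much
        SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
          FROM performance_schema.events_waits_summary_global_by_event_name
         ORDER BY SUM_TIMER_WAIT DESC
         LIMIT 10;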

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week, and we are really looking forward to seeing you in one of our presentations, especially "Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets" on Monday, Oct 1, 12:15pm in Moscone South 307 (just skip the lunch; the boxed food is not healthy at all):

    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307
    Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets
    Mike Dietrich - Consulting Member of Technical Staff, Oracle
    Georg Winkens - Technical Manager, Amadeus Data Processing
    Carol Tagliaferri - Senior Development Manager, Oracle

    Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn't everyone? In this session, you will learn directly from Oracle's Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade.

    And don't forget: we already start on Sunday, so if you'd like to learn about the SAP database upgrades at Deutsche Messe:

    Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001
    Oracle Database Upgrade to 11g Release 2 with SAP Applications
    Andreas Ellerhoff - DBA, Deutsche Messe AG
    Mike Dietrich - Consulting Member of Technical Staff, Oracle
    Jan Klokkers - Sr. Director SAP Development, Oracle

    Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011, all SAP production systems there run successfully with Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the top operational aspects from a DBA perspective.

    And unfortunately the Hands-On Lab is sold out already... We would like to apologize, but we have absolutely ZERO influence on either the number of runs or the number of available seats.

    Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13
    Hands-On Lab: Upgrading an Oracle Database Instance, Using Best Practices
    Roy Swonger - Senior Director, Software Development, Oracle
    Carol Tagliaferri - Senior Development Manager, Oracle
    Mike Dietrich - Consulting Member of Technical Staff, Oracle
    Cindy Lim - PMTS, Oracle
    Carol Palmer - Principal Product Manager, Oracle

    This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team.

    See you soon!

    Read the article

  • APress Deal of the Day - 1/Aug/2013 - Pro SQL Server 2012 Reporting Services

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/08/01/apress-deal-of-the-day---1aug2013---pro-sql.aspx

    Today's $10 deal of the day from APress at http://www.apress.com/9781430238102 is Pro SQL Server 2012 Reporting Services.

    "Pro SQL Server 2012 Reporting Services opens the door to delivering customizable, web-enabled reports across your business using Microsoft's enterprise-level reporting platform."

    Read the article

  • What's new in the RightNow November 2012 release?

    - by Richard Lefebvre
    What's new in the RightNow November 2012 release? To find out, please watch this tutorial with embedded demonstration or read the November 2012 Release Notes.

    News Facts

    The November 2012 release of Oracle's RightNow CX Cloud Service marks the completion of development efforts for 2012 and continues Oracle's commitment to enhancing the Oracle RightNow offering following the acquisition. The new release delivers key capabilities designed to help organizations improve customer experiences in order to increase customer acquisition and retention, while reducing total cost of ownership.

    Part of the Oracle Cloud, Oracle RightNow CX Cloud Service now integrates Oracle RightNow Chat Cloud Service with Oracle Engagement Engine Cloud Service, helping organizations intelligently and proactively engage with customers through the right channel at the right time.

    Chat solutions have emerged as an important component of a cross-channel customer experience strategy. According to Forrester Research, Inc., chat adoption rose dramatically between 2009 and 2011, from 19% to 37%, and chat has the highest satisfaction level of all customer service channels at 62% satisfaction. (*)

    To help companies deliver enhanced customer experiences, Oracle has made significant investments in Oracle RightNow Chat Cloud Service throughout 2012. With the addition of rules-based engagement to existing capabilities such as co-browse, mobile chat, and cross-channel knowledge integration with the contact center, all delivered via the cloud, Oracle RightNow Chat Cloud Service is differentiated as the industry-leading chat solution. The Oracle Cloud offers a broad portfolio of software-as-a-service applications, including Oracle Customer Service and Support Cloud Service, which is based on the Oracle RightNow CX Cloud Service.

    New Capabilities

    Key Oracle RightNow Chat Cloud Service and other cross-channel capabilities include:

    - Chat Business Rules, with over 70 built-in rule conditions, leverage the Oracle Engagement Engine to help organizations capture rich visitor data and invoke complex actions and triggers. Chat Business Rules allow granular control over when to engage a customer via the chat channel based on customer behavior, customer profile information and operational information.
    - Click-to-Call provides the option for a customer to engage with a live agent over the phone during the Web browsing experience.
    - Chat Availability Controls provide organizations with the ability to throttle volume through the chat channel based on real-time agent availability and wait-time thresholds. This ability to manage the channel more efficiently allows organizations to provide a better experience to customers using the chat channel.
    - Strategic and Operational Chat Channel Analytics provide better insight into channel and agent productivity, utilization and effectiveness, with both out-of-the-box and ad hoc reports. New chat channel analytics provide comprehensive metrics with full data transparency.
    - Background Service Updates improve high-availability metrics for Oracle RightNow Chat Cloud Service during service update periods, setting the industry-leading standard for sales and service delivery to customers via the chat channel.

    Additional capabilities include:

    - Improved Web developer tools for more efficient self-service user interface design
    - Improved administration for enhanced user session management
    - Increased cross-channel community collaboration
    - Enhanced extensibility widgets and syndication management
    - Streamlined content management and analytics capabilities

    Read the full announcement here.

    Read the article

  • WebCenter Marketing and Upcoming Events

    - by rituchhibber
    Events:

    October 30, 2012 - ResCare Solves Content Lifecycle Challenges with Oracle WebCenter - Webcast
    November 1, 2012 - Paper Burying Your HR Processes? Dig Your Way Out With Oracle WebCenter! - Webcast
    November 15, 2012 - Social Business Thought Leader Webcast: Three Ways to Fix Your Broken Organization, featuring Christian Finn - Webcast

    Marketing:

    WebCenter Sites Sales eVite: Embrace the Base: Create an Exceptional Online Customer Experience with Oracle WebCenter Sites. Directs recipients to the Connected Customer Experience Resource Center to see the latest demos, analyst reports, and customer webcasts promoting WebCenter Sites. For more information click here.

    WebCenter Social Business Thought Leaders Series: Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology. Brian Solis, Altimeter Group digital analyst and futurist. December 13, 2012, 10am PDT. Registration available soon; find other content from this speaker here.

    Webcast: WebCenter Sites for Applications: Disconnected Online Customer Experience? Connect it with Oracle WebCenter. November 8, 2012. eVite | Registration Page

    WebCenter in Action Customer & Partner webcast series: Started earlier in FY13, a new webcast series featuring WebCenter customer deployments that are executed by a partner. The next webcast in the series will be November 14th: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter. Click here to learn more.

    OnDemand Webcast: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter. Complex documents must be created, assembled, reviewed, and tracked. To avoid fragmented, chaotic information processes, organizations must adopt an integrated set of strategies, standards, best practices, and technologies for managing information. Attend this webcast to learn how Oracle WebCenter has allowed ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as a primary business communication tool.

    On-Demand Assets:

    Avoid Social Media Fatigue - Learn the 9 C's of Customer Engagement, featuring Ray Wang, Principal Analyst and CEO, Constellation Research - Webcast
    WebCenter in Action Series: Hitachi Data Systems Improves Global Web Experience with Oracle WebCenter, presented by Hitachi Data Systems and Lingotek - Webcast
    Managing Social Relationships for the Enterprise, featuring Jeremiah Owyang, Industry Analyst, Altimeter Group and Reggie Bradford, Vice President, Oracle - Webcast
    Oracle's Vision for the Social-Enabled Enterprise, presented by Mark Hurd, Thomas Kurian and Reggie Bradford - Webcast
    WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, presented by Qualcomm and Keste - Webcast
    Social Business Thought Leaders Series: 6 Counterintuitive Best Practices for Social Collaboration Adoption, featuring John Brunswick, Oracle - Webcast
    Oracle WebCenter Connects Patients and Researchers in Cancer Control Mission, presented by Canadian Partnership Against Cancer and App-Systems - Webcast
    Oracle WebCenter: Modernize, Aggregate and Extend Your Portals - Webcast
    Top 10 Technology Trends Driving Business Innovation, featuring Andy Mulholland, CTO, Capgemini - Webcast
    Ancestry.com Helps Families Uncover History with Oracle WebCenter - Webcast
    Organic Business Networks: Doing Business in a Hyper-Connected World, featuring Mike Fauscette, GVP, IDC - Webcast
    Social Business and Innovation, featuring John Mancini, President, AIIM - Webcast
    Do More with Oracle WebCenter: Expand Beyond Web Experience Management - Webcast
    Race Against the Machine, featuring Andrew McAfee, author and principal scientist at MIT - Webcast
    Introducing Oracle WebCenter Sites 11gR1: Transforming the Online Experience - Webcast
    Mobile is the New Face of Engagement, featuring Ted Schadler, Vice President & Principal Analyst, Forrester Research Inc - Webcast
    Analyst Report: IDC Research: Oracle Debuts New Release of Oracle WebCenter Sites

    Read the article

  • Election 2012: Twitter Breaks Records with MySQL

    - by Bertrand Matthelié
    Twitter VP of Infrastructure Operations Engineering Mazen Rawashdeh shared news and numbers yesterday on his blog:

    "Last night, the world tuned in to Twitter to share the election results as U.S. voters chose a president and settled many other campaigns. Throughout the day, people sent more than 31 million election-related Tweets (which contained certain key terms and relevant hashtags). And as results rolled in, we tracked the surge in election-related Tweets at 327,452 Tweets per minute (TPM). These numbers reflect the largest election-related Twitter conversation during our 6 years of existence, though they don't capture the total volume of all Tweets yesterday."

    "Last night, Twitter averaged about 9,965 TPS from 8:11pm to 9:11pm PT, with a one-second peak of 15,107 TPS at 8:20pm PT and a one-minute peak of 874,560 TPM. Seeing a sustained peak over the course of an entire event is a change from the way people have previously turned to Twitter during live events. Now, rather than brief spikes, we are seeing sustained peaks for hours."

    Congrats to Jeremy Cole, Davi Arnaut and the rest of the team at Twitter for their excellent work! Jeremy recently gave a keynote presentation at MySQL Connect describing how MySQL powers Twitter, and why they chose and continue to rely on MySQL for their operations. You can watch the presentation here. He also went into more detail during another presentation later that day, and you can access the slides here. Below are a couple of tweets from Jeremy after what must surely have been hectic days... Keep up the good work, guys!

    Read the article

  • WebLogic 10.3.4 (PS3) node manager won't start?

    - by angelo.santagata
    Hi all, well I'm back from Australia, and one of the things which happened while I was away was that Oracle announced the PS3 release of Oracle's SOA & WebCenter products. Now I normally use pre-installed images, but I always like to install the products at least once; that way I get to see their installation caveats. Here's one.

    Installation on Windows 7 64-bit, 64-bit JVM, generic WebLogic Server installer. All worked fine, EXCEPT I can't start the node manager; I get the following error:

    <08-Feb-2011 17:16:48> <INFO> <Loading domains file: D:\products\wls1034\WLSERV~1.3\common\NODEMA~1\nodemanager.domains>
    <08-Feb-2011 17:16:48> <SEVERE> <Fatal error in node manager server>
    weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded
        at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:249)
        at weblogic.nodemanager.server.NMServerConfig.<init>(NMServerConfig.java:190)
        at weblogic.nodemanager.server.NMServer.init(NMServer.java:182)
        at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)
        at weblogic.nodemanager.server.NMServer.main(NMServer.java:390)
        at weblogic.NodeManager.main(NodeManager.java:31)
    Caused by: java.lang.UnsatisfiedLinkError: D:\products\wls1034\wlserver_10.3\server\native\win\32\nodemanager.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at weblogic.nodemanager.util.WindowsProcessControl.<init>(WindowsProcessControl.java:17)
        at weblogic.nodemanager.util.ProcessControlFactory.getProcessControl(ProcessControlFactory.java:24)
        at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:247)
        ... 5 more

    OK, it appears that the node manager has gotten confused and thinks this is a 32-bit install of WebLogic Server, whereas it is the 64-bit install. Might have been something I did, or didn't do, on installation (e.g. -d64 on the JVM command line); however, the workaround is pretty easy:

    1. Create a file called nodemanager.properties in %WL_HOME%\common\nodemanager (on my machine that was D:\products\wls1034\wlserver_10.3\common\nodemanager).
    2. Add the following line to it: NativeVersionEnabled=false
    3. And start it up! This will force it not to use .DLL files and to use emulation/non-native methods instead.
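    Put together, the workaround file is just one property (the comment line is added here for clarity, and the startNodeManager.cmd location is the standard one, an assumption on my part):

        # D:\products\wls1034\wlserver_10.3\common\nodemanager\nodemanager.properties
        NativeVersionEnabled=false

    Then start the node manager again with %WL_HOME%\server\bin\startNodeManager.cmd and the native-library error should be gone.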

    Read the article

  • Silverlight Cream for February 02, 2011 -- #1039

    - by Dave Campbell
    In this Issue: Tony Champion, Gill Cleeren, Alex van Beek, Michael James, Ollie Riches, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), Daniel N. Egan, Loek Van Den Ouweland, and Paul Thurrott.

    Above the Fold:
    Silverlight: "Using the AutoCompleteBox" - Peter Kuhn
    WP7: "Windows Phone Image Button" - Loek Van Den Ouweland
    Training: "New WP7 Virtual Labs" - Daniel N. Egan

    Shoutouts:
    SilverlightShow has their top 5 most popular news articles up: SilverlightShow for Jan 24-30, 2011.
    Rudi Grobler posted answers he gives to questions about Silverlight - Where do I start?
    Brian Noyes starts a series of webinars at SilverlightShow this morning at 10am PDT: Free Silverlight Show Webinar: Querying and Updating Data From Silverlight Clients with WCF RIA Services.
    Join your fellow geeks at Gangplank in Chandler, Arizona this Saturday as Scott Cate and AZGroups bring you Azure Boot Camp - Feb 5th 2011.

    From SilverlightCream.com:

    Deploying Silverlight with WCF Services
    Tony Champion takes a step out of his norm (Pivot) and has a post up about deploying WCF services with your SL app, and how to take the pain out of that without pulling out your hair.

    Getting ready for Microsoft Silverlight Exam 70-506 (Part 3)
    Gill Cleeren's part 3 of getting ready for the Silverlight exam is up at SilverlightShow... with links to the first two parts. There's so much good information linked off these... thanks Gill and 'The Show'!

    A guide through WCF RIA Services attributes
    Alex van Beek has a post up you will probably want to bookmark unless you're not using WCF RIA... do you know all the attributes by heart? ... how about an excellent explanation of 10 of them?

    Using DeferredLoadListBox in a Pivot Control
    Michael James discusses using the DeferredLoadListBox, and then also using it with the Pivot control... but not without some pain points, which he defines and gives the workaround for.

    WP7: Know your data
    Ollie Riches' latest is about data and WP7... specifically 'knowing' what data you're needing/using to avoid the 90MB memory limit... He gives a set of steps to follow to measure your data model to avoid getting in trouble.

    Using the AutoCompleteBox
    Peter Kuhn takes a great look at the AutoCompleteBox... the basics, and then well beyond with custom data, item templates, custom filters, asynchronous filtering, and a behavior for MVVM async filtering.

    OData and Windows Phone 7 Part 2
    Mike Ormond has part 2 of his OData/WP7 post up... lashing up the images to go along with the code this time out... nice looking app.

    WP7 RoundToggleButton and RoundButton in depth
    WindowsPhoneGeek is checking out the RoundToggleButton and RoundButton controls from the Coding4fun Toolkit in detail... of course where to get them, and then the setup, demo project included.

    All about Dependency Properties in Silverlight for WP7
    WindowsPhoneGeek's latest post is a good dependency-property discussion related to WP7 development, but if you're just learning, it's a good place to learn about the subject.

    New WP7 Virtual Labs
    Daniel N. Egan posted links to 6 new WP7 Virtual Labs released on 1/25.

    Windows Phone Image Button
    Loek Van Den Ouweland has a style up on his blog that gives you an image button for your WP7 apps, and a sweet little video showing how it's done in Expression Blend too.

    Yet another free Windows Phone book for developers
    Paul Thurrott found a link to another free eBook for WP7 development. This one is by Puja Pramudya and is an English translation of the original, and is an introductory text, but hey... it's free... give it a look!
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub.

    Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly, and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects

    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime.

    When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

    Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

    Stub objects are built from a mapfile, which must satisfy the following requirements:

    - The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.
    - All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.
    - The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.
    - All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

    To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

    % cat idx5.c
    int _idx5[5] = { 0, 1, 2, 3, 4 };

    #pragma weak idx5 = _idx5

    int
    idx5_func(int index)
    {
            if ((index < 0) || (index > 4))
                    return (-1);
            return (_idx5[index]);
    }

    A mapfile is required to describe the interface provided by this shared object.

    % cat mapfile
    $mapfile_version 2
    STUB_OBJECT;
    SYMBOL_SCOPE {
            _idx5 {
                    ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                    ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
        local:
            *;
    };

    The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c
    #include <stdio.h>

    extern int _idx5[5], idx5[5], idx5_func(int);

    int
    main(int argc, char **argv)
    {
            int i;

            for (i = 0; i < 5; i++)
                    (void) printf("[%d] %d %d %d\n",
                        i, _idx5[i], idx5[i], idx5_func(i));
            return (0);
    }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
    % ln -s libidx5.so.1 stublib/libidx5.so
    % elfdump -d stublib/libidx5.so | grep STUB
        [11]  FLAGS_1           0x4000000           [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

    % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
    % ./a.out
    ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
    Killed
    % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
    ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
    Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

    % ./a.out
    [0] 0 0 0
    [1] 1 1 1
    [2] 2 2 2
    [3] 3 3 3
    [4] 4 4 4

    Mapfile Changes

    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input

    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

        Name        Meaning
        ......
        _ET_DYN     shared object
        _ET_EXEC    executable object
        _ET_REL     relocatable object
        ......

    STUB_OBJECT Directive

    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

        STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents:

    - All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.
    - The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.
    - All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS.
    - All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

    These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute

    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

        ASSERT {
                ALIAS = symbol_name;
                BINDING = symbol_binding;
                TYPE = symbol_type;
                SH_ATTR = section_attributes;

                SIZE = size_value;
                SIZE = size_value[count];
        };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

    In normal use, the link-editor evaluates the ASSERT attributes when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

    ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute    Meaning
        BITS                 Section is not of type SHT_NOBITS
        NOBITS               Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

        $if _ELF32
        foo     { ASSERT { TYPE=data; SIZE=4 } };
        bar     { ASSERT { TYPE=data; SIZE=20 } };
        $elif _ELF64
        foo     { ASSERT { TYPE=data; SIZE=8 } };
        bar     { ASSERT { TYPE=data; SIZE=40 } };
        $else
        $error UNKNOWN ELFCLASS
        $endif

    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

    - The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.
    - The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value.

    Using this feature, the example above can be written more naturally as:

        foo     { ASSERT { TYPE=data; SIZE=addrsize } };
        bar     { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea

    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions

    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then use the same link command you're currently using, with the addition of the -z stub option.

    Happy Stubbing!

    Read the article

  • Taking the fear out of a Cloud initiative through the use of security tools

    - by user736511
    Typical employees, constituents, and business owners interact with online services at a level where their knowledge of back-end systems is low, and most of the time there is no interest in knowing the systems' architecture. Most application administrators, while partially responsible for these systems' upkeep, have very few interactions with them, at least at an operational, platform level. Of greatest interest to these groups is the consistent, reliable, and manageable operation of the interfaces with which they communicate.

    Introducing the "Cloud" topic into any evolving architecture automatically raises concerns about data and identity security, simply because of the perception that unless an enterprise owns the silicon, it is not able to manage its content. But is this really true?

    In the majority of traditional architectures, data and the applications that access it are physically distant from the organization that owns it. It may reside in a shared data center, or in a geographically convenient location that spans large organizations' connectivity capabilities. In the end, very often, the model of a "traditional" architecture is fairly close to the "new" Cloud architecture. The most notable difference is that, by nature, a Cloud setup uses security as a core function, and not as a necessary add-on. Therefore, following best practices, one can say that data can be safer in the Cloud than in traditional, stove-piped environments where data access is segmented and difficult to audit.

    The caveat is, of course, what "best practices" consist of, and here is where Oracle's security tools are perfectly suited for the task. Since Oracle's model is to support very large organizations, it is fundamentally concerned with distributed applications, databases, etc. and their security, and the related Identity Management products and DB Security options reflect that concept. In the end, consumers of applications and their data are served more safely in a controlled Cloud environment, while realizing the many cost savings associated with it. Having very fast resources to serve them (such as the Exa* platform) makes the concept even more attractive.

    Finally, if a Cloud strategy does not seem feasible, consider the pros and cons of a traditional vs. a Cloud architecture. Using the exact same criteria and business goals/traditions, and with Oracle's technology, you might be hard pressed to justify maintaining the technical status quo on security alone.

    For additional information please visit Oracle's Cloud Security page at: http://www.oracle.com/us/technologies/cloud/cloud-security-428855.html

    Read the article

  • AJAX Control Toolkit - Globalization (Language)

    - by Guilherme Cardoso
    For those who use AjaxToolKit controls that present language-dependent data (the CalendarExtender with the names of the months and weeks, for example), you can change the language to be presented in a simple way.

    In Web.config, let's define the primary culture as follows:

        <system.web>
            <globalization uiCulture="pt-pt" culture="pt-pt" />
            ...
        </system.web>

    In this example I'm using Portuguese.

    To finish, it is necessary to change our ScriptManager. Whether it is the AjaxToolKit's ToolkitScriptManager or the ScriptManager belonging to the .NET Framework, set the EnableScriptGlobalization and EnableScriptLocalization properties to true:

        <cc1:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server"
            EnableScriptGlobalization="true" EnableScriptLocalization="true">
        </cc1:ToolkitScriptManager>

    or

        <asp:ScriptManager ID="ScriptManager1" runat="server"
            EnableScriptGlobalization="true" EnableScriptLocalization="true">
        </asp:ScriptManager>

    This way the AjaxToolKit controls always use the Portuguese language. It is important to keep AjaxToolKit updated to avoid missing or incorrect translations; I once hit such an error in the ModalPopup before updating to the latest version, but all the controls I have used since are translated correctly.
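    As a quick illustration (a sketch, not from the original post; TextBox1 and the date format are assumptions), a CalendarExtender wired up under these settings will then render its month and week names in the configured culture:

        <asp:TextBox ID="TextBox1" runat="server" />
        <cc1:CalendarExtender ID="CalendarExtender1" runat="server"
            TargetControlID="TextBox1" Format="dd-MM-yyyy" />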

    Read the article

  • VirtualBox

    - by DesigningCode
    I was wanting to play around with something in a VM the other day. I was curious what was available for free, if anything, for Windows. I quickly came across VirtualBox (http://www.virtualbox.org/). Downloaded, installed, no problem! It works really nicely. It was commercial software (by Sun, now Oracle) that turned open source. In terms of a license it says:

    In summary, the VirtualBox PUEL allows you to use VirtualBox free of charge for personal use or, alternatively, for product evaluation.

    An interesting feature it has is built-in RDP, which is useful if you have a guest OS that doesn't support RDP. Speaking of RDP… which I will cover in my next blog post… I learnt something REALLY useful the other day.
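    For the curious, that built-in RDP server can be toggled per VM from the command line. A minimal sketch (not from the original post; the VM name is hypothetical, and these flag names are from newer VirtualBox releases, where older 3.x builds used --vrdp instead):

        VBoxManage modifyvm "MyTestVM" --vrde on
        VBoxManage modifyvm "MyTestVM" --vrdeport 3390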

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full source of the kernel, it is good enough for tracing it, and even for tweaking the kernel. Tracing the kernel and seeing how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So, first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\

    The next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have full source, that is definitely the right answer, but neither applies to our case. So what should I do?

    Let's dig deeper into the coreos\nk folder; there are a couple of subfolders: CELOG, KDSTUB, KERNEL, etc. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no error at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates!

    Here are the steps:

    1. Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\
    2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modifications you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\
    4. "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel

    If everything went well so far, you should get a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\

    Basically, you just rebuilt your kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1", then "sysgen -p common nk nkprof" and "makeimg", if you can't wait another minute for "blddemo clean -q".

    That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), to create SOURCES files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, of course including the kernel. So I am going to introduce a second way to build your own kernel by using the SYSGEN_CAPTURE tool.

    Again the steps:

    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Use "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you could also "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename the SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c
    6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths to your SOURCES file (see the sketch after the macro listing below).
    7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg" - voila!

    Here are the macros you need to add for x86:

        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE
        # Machine independent defines
        CDEFINES=$(CDEFINES) -DDBGSUPPORT
        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc
        !IFDEF DP_SETTINGS
        CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
        !ENDIF
        ASM_SAFESEH=1
        CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE
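    Pulling those fragments together, a complete SOURCES file for the clone directory might look roughly like this. This is a sketch: the TARGETNAME/TARGETTYPE lines are assumptions for illustration (SYSGEN_CAPTURE generates the real target definitions for you), and vm.c stands in for whichever kernel sources you actually copied:

        TARGETNAME=kern
        TARGETTYPE=LIBRARY

        # Sources copied from private\winceos\coreos\nk\kernel (steps 4 and 5)
        SOURCES=vm.c

        # x86 macros from sources.inc, as listed above
        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE -DDBGSUPPORT
        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc
        ASM_SAFESEH=1
        CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE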

    Read the article

  • Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF'

    - by Eric Nelson
    [You might want to also read other GuestPosts on my blog – or contribute one?]

    On the 26th and 27th of March (2010) myself and Edd Morgan of Microsoft will be popping along to the Scottish Ruby Conference. I dabble with Ruby and I am a huge fan, whilst Edd is a "proper Ruby developer". Hence I asked Edd if he was interested in creating a guest post or two for my blog on IronRuby. This is the second of those posts. If you should stumble across this post and happen to be attending the Scottish Ruby Conference, then please do keep a look out for myself and Edd. We would both love to chat about all things Ruby and IronRuby. And… we should have (if Amazon is kind) a few books on IronRuby with us at the conference which will need to find a good home.

    This is me and Edd and… the book:

    Order on Amazon: http://bit.ly/ironrubyunleashed

    Using IronRuby and .NET to produce the 'Hello World of WPF'

    In my previous post I introduced, to a minor extent, IronRuby. I expanded a little on the basics by getting a Rails app up and running on this .NET implementation of the Ruby language — but there wasn't much to it! So now I would like to go from simply running a pre-existing project under IronRuby to developing a whole new application demonstrating the seamless interoperability between IronRuby and .NET. In particular, we'll be using WPF (Windows Presentation Foundation) — the component of the .NET Framework stack used to create rich media and graphical interfaces.

    Foundations of WPF

    To reiterate, WPF is the engine in the .NET Framework responsible for rendering rich user interfaces and other media. It's not the only collection of libraries in the framework with the power to do this — Windows Forms does the trick, too — but it is the most powerful and flexible. Put simply, WPF really excels when you need to employ eye candy. It's all about creating impact. Whether you're presenting a document, video, a data entry form, some kind of data visualisation (which I am most hopeful for, especially in terms of IronRuby - more on that later) or chaining all of the above with some flashy animations, you're likely to find that WPF gives you the most power when developing any of these for a Windows target.

    Let's demonstrate this with an example. I give you what I like to consider the 'hello, world' of WPF applications: the analogue clock.

        Today, over my lunch break, I created a WPF-based analogue clock using IronRuby... Any normal person would have just looked at their watch. - Twitter

    The Sample Application
    Click here to see this sample in full on GitHub.

    [code screenshot: using Windows Presentation Foundation from IronRuby to create a Clock class]
    [code screenshot: invoking the Clock class]

    Which gives you:

    [screenshot: the rendered analogue clock]

    The above is by no means perfect (it was a lunch break), but I think it does the job of illustrating IronRuby's interoperability with WPF using a familiar data visualisation. I'm sure you'll want to dissect the code yourself, but allow me to step through the important bits. (By the way, feel free to run this through ir first to see what actually happens.)

    Now we're using IronRuby - unlike my previous post where we took pure Ruby code and ran it through ir, the IronRuby interpreter, to demonstrate compatibility. The main thing of note is the very distinct parallels between .NET namespaces and Ruby modules, .NET classes and Ruby classes. I guess there's not much to say about it other than at this point, you may as well be working with a purely Ruby graphics-drawing library.
    A Brief Note on Events

    Events are a big part of developing client applications in .NET, as well as under every other environment I can think of. In case you aren't aware, event-driven programming is essentially the practice of telling your code to call a particular method, or other chunk of code (a delegate), when something happens at an unpredictable time. You can never predict when a user is going to click a button, move their mouse or perform any other kind of input, so the advent of the GUI is what necessitated event-driven programming.

    This is where one of my favourite aspects of the Ruby language, blocks, can really help us. In traditional C#, for instance, you may subscribe to an event (assign a block of code to execute when the event occurs) in one of two ways: by passing a reference to a named method, or by providing an anonymous code block. You'd be right to see the parallel here with Ruby's concept of blocks, Procs and lambdas.

    As demonstrated at the very end of this rather basic script, we are using .NET's System.Timers.Timer to (attempt to) update the clock every second. (I know it's probably not the best way of doing this, but it serves for example's sake.) Note: diverting a little from what I said above, the ticking of a clock is very predictable, yet we still use the event our Timer raises to do this updating, as one of many ways to perform that task outside of the main thread.

    You'll see that all that's needed to assign a block of code to be triggered on an event is to provide that block to the method named after the event as it is known to the CLR. The drawback is that this delegates only one code block to each event. To subscribe multiple handlers to an event, use the add method, which pushes each new handler onto the end of a queue. Like so:

        def tick
          puts "tick tock"
        end
        timer.elapsed.add method(:tick)

        timer.elapsed.add proc { puts "tick tock" }

        tick_handler = lambda { puts "tick tock" }
        timer.elapsed.add(tick_handler)

    The ability to just provide a block of code as an event handler helps IronRuby towards that very important term I keep throwing around: low ceremony. Anonymous methods are, of course, available in other, more conventional .NET languages such as C# and VB but, as usual, they feel ever so much more elegant and natural in IronRuby. Note: whether it's a named method or an anonymous chunk o' code, the block you delegate to the handling of an event can take arguments - commonly, a sender object and some args.
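    For context, the timer those fragments hang off could be wired up as below. This, again, is my own illustrative sketch rather than Edd's code; the one-second interval and the sleep-based lifetime are assumptions made so the fragment can run as a stand-alone console script:

        require 'System'  # System.Timers lives in System.dll (ir may preload it)

        timer = System::Timers::Timer.new(1000)  # raise Elapsed every 1000 ms

        # Subscribing with a plain Ruby block - the CLR hands the handler the
        # conventional (sender, args) pair.
        timer.elapsed { |sender, args| puts "tick tock" }

        timer.start
        sleep 5      # keep the script alive long enough to see a few ticks
        timer.stop

    The design point is the one made above: the event is just a method you hand a block to, so no delegate types or explicit wiring ceremony leak into the Ruby code.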
    Another Brief Note on Verbosity

    Personally, I don't mind verbose chaining of references in my code as long as it doesn't interfere with performance - as evidenced in the example above. While I love clean code, there's a certain feeling of safety that comes with the explicitness of long-winded addressing and the describing of objects, as opposed to ambiguity (not unlike this sentence). However, when working with IronRuby, even I grow tired of typing System::Whatever::Something. Some people enjoy simply assuming namespaces and forgetting about them, regardless of the language they're using. Don't worry - IronRuby has you covered.

    It is completely possible, with a call to include, to bring the contents of a .NET-converted module into the context of your IronRuby code, just as you would if you wanted to bring in an 'organic' Ruby module. To refactor the style of the above example, I could place the following at the top of my Clock class:

        class Clock
          include System::Windows::Shapes
          include System::Windows::Media
          include System::Windows::Threading
          # and so on...

    And by doing so, reduce calls to System::Windows::Shapes::Ellipse.new to simply Ellipse.new, or references to System::Windows::Threading::DispatcherPriority.Render to a friendlier DispatcherPriority.Render.

    Conclusion

    I hope by now you can better understand how IronRuby interoperates with .NET, and how you can harness the power of the .NET Framework with the dynamic nature and elegant idioms of the Ruby language. The manner and parlance of Ruby that make it a joy to work with sets of data are, of course, present in IronRuby. Couple that with WPF's capability to produce great graphics quickly and easily, and I hope you can visualise the possibilities of data visualisation using these two together.

    Using IronRuby and WPF together to create visual representations of data and infographics is very exciting to me. Although today, with this project, we're only presenting one simple piece of information - the time - the potential is much grander. My day-to-day job is centred around software development and UI design, specifically in the realm of healthcare, and if you were to pay a visit to our office you would behold, directly above my desk, a large plasma TV with a constantly rotating, animated slideshow of charts and infographics that help members of our team do their jobs. It's an app powered by WPF which never fails to spark some conversation with visitors whose gaze has been hooked. If only it were written in IronRuby, the pleasantly low ceremony and reduced pre-processing time for my brain would have helped greatly.

    Edd Morgan's blog

    Related Links: Getting PHP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • Oracle + Sun Product Strategy Webcast Series

    - by Paulo Folgado
    The Oracle + Sun Product Strategy Webcast series is composed of informative, on-demand sessions that offer strategies for Sun's major product lines related to the company combination, explain how Oracle will deliver more innovation to our customers, and outline our approach to protecting customers' investments. Ranging from 5 to 27 minutes each, the Webcasts cover the strategies for hardware, systems, software, solutions, and partners.

    In addition, Judson Althoff, SVP, Worldwide Alliances and Channels, Oracle, followed up the Webcast series with a video FAQ to help answer the following top partner questions about the Oracle + Sun combination and the OPN Specialized program:

    - What is the impact the overall combined company will have on the partners?
    - What are Oracle's plans for selling direct, and what is the impact to partners?
    - How will Sun partners integrate into OPN Specialized?
    - As a Sun partner, am I automatically migrated into OPN Specialized?
    - Will Oracle continue to partner with other hardware vendors?
    - How will Oracle map existing Sun investments and certifications into OPN Specialized?
    - As a Sun partner new to Oracle, where should I be placing my focus?
    - What can partners expect to see relative to Exadata V2?
    - How do content delivery platforms (CDPs) fit into the Oracle framework?
    - How do existing Sun Partners place orders?

    Read the article

  • All New MySQL For Beginners Training on Demand Offering

    - by Antoinette O'Sullivan
    Get started on MySQL for Beginners training within 24 hours with the newly released MySQL for Beginners Training on Demand. With Training on Demand, you get:

    - Training by top MySQL instructors
    - Access to a hands-on practice environment
    - Full classroom content available 24/7
    - And no travel expenses to worry about

    The MySQL for Beginners course covers all the basics and gets you on your way with a solid foundation. This hands-on class covers the fundamentals of SQL and relational databases, using MySQL as a teaching tool.

    In addition to the Training on Demand option, you have the choice of taking the MySQL for Beginners course as:

    Live Virtual Training: Live, interactive, online training delivered by a MySQL instructor to you anywhere you have an internet connection, with hundreds of events on the schedule across different time zones.

    In-Classroom Training: Scheduled events include those listed below.

        Location                   Date               Delivery Language
        Warsaw, Poland             16 July 2012       Polish
        Dublin, Ireland            15 October 2012    English
        Belfast, Ireland           28 August 2012     English
        Rome, Italy                5 November 2012    Italian
        Hamburg, Germany           3 December 2012    German
        Lisbon, Portugal           5 November 2012    European Portuguese
        Amsterdam, Netherlands     10 December 2012   Dutch
        Nieuwegein, Netherlands    17 September 2012  Dutch
        Barcelona, Spain           5 November 2012    Spanish
        Riga, Latvia               15 July 2012       Latvian
        Petaling Jaya, Malaysia    7 August 2012      English
        Ottawa, Canada             7 August 2012      English
        Toronto, Canada            7 August 2012      English
        Montreal, Canada           7 August 2012      English
        Sao Paulo, Brazil          10 July 2012       Brazilian Portuguese

    For more information on any of the MySQL for Beginners training options, or to learn more about the authorized MySQL curriculum, go to the Oracle University portal and click on MySQL.

    Read the article

  • Everything changes, except that it doesn't

    - by Dennis Vroegop
    You may have noticed this. Microsoft launched a new product this week. Well, launched is a strong word; they announced it. They call it... The Surface! What is it? Well, it's a cool-looking family of tablets designed for Windows 8. And I have to say: they look stunning and I can't wait to have one. There's just one thing... the name... Where have I heard that before? Why, indeed! Yes, it is also the name of a new paradigm in computing: Surface Computing. You may have read something about that, here on this blog for instance. Well, in order to prevent confusion, they have decided to rename Surface to PixelSense - you know, the technology that drives the 'camera' inside the Samsung SUR40 for Microsoft Surface. So now, when we talk about Surface, we mean the new tablet; when we talk about PixelSense, we mean that big table… So there you have it… Welcome, PixelSense! For more info see http://www.surface.com for the tablet and http://www.pixelsense.com for PixelSense.

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experienced subsecond response times while a PeopleSoft Payroll batch job covering 500,000 employees executed in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier.

    The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape

    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark

        Systems                                              Users   Avg Search (sec)  Avg Save (sec)  Batch (min)  Streams
        SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)  18,000  0.988             0.539           32.4         128
        SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)  18,000  0.944             0.503           43.3         64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was run previously. They are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark

        Systems                                                 Users   Avg Search (sec)  Avg Save (sec)  Batch (min)  Streams
        SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)  18,000  1.048             0.742           N/A          N/A

    The following results are for the PeopleSoft Payroll benchmark that was run previously. They are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)

        Systems          Users  Avg Search (sec)  Avg Save (sec)  Batch (min)  Streams
        SPARC T4-4 (db)  N/A    N/A               N/A             30.84        96

    Configuration Summary

    Application Configuration:
        1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
        512 GB memory
        Oracle Solaris 11 11/11
        PeopleTools 8.52
        PeopleSoft HCM 9.1
        Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
        Java Platform, Standard Edition Development Kit 6 Update 32

    Database Configuration:
        1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
        256 GB memory
        Oracle Solaris 11 11/11
        Oracle Database 11g Release 2
        PeopleTools 8.52
        Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
        Micro Focus Server Express (COBOL v 5.1.00)

    Web Tier Configuration:
        1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
        256 GB memory
        Oracle Solaris 11 11/11
        PeopleTools 8.52
        Oracle WebLogic Server 10.3.4
        Java Platform, Standard Edition Development Kit 6 Update 32

    Storage Configuration:
        1 x Sun Server X2-4 as a COMSTAR head for data
            4 x Intel Xeon X7550, 2.0 GHz
            128 GB memory
        1 x Sun Storage F5100 Flash Array (80 flash modules)
        1 x Sun Storage F5100 Flash Array (40 flash modules)
        1 x Sun Fire X4275 as a COMSTAR head for redo logs
            12 x 2 TB SAS disks with Niwot RAID controller

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, running against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark in which the database SQL is moderately complex. The results are certified by Oracle and a white paper is published.

    PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators. The benchmark consists of 14 scenarios that emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, and updating an employee profile. All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response search/save time across all the transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

    The benchmark metrics are taken for each respective benchmark while running simultaneously against the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.
    Key Points and Best Practices

    Two PeopleSoft domain sets, with 200 application servers each, were hosted in two separate Oracle Solaris Zones on a SPARC T4-4 server to demonstrate consolidation of multiple application servers, ease of administration, and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance: it reduces memory-access latency by using the physical memory closest to the processors, and it offloads I/O interrupt handling to the default set's threads, freeing up CPU resources for application-server threads and balancing the application workload across 240 threads.

    A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also

    - Oracle PeopleSoft Benchmark White Papers (oracle.com)
    - SPARC T4-2 Server (oracle.com, OTN)
    - SPARC T4-4 Server (oracle.com, OTN)
    - PeopleSoft Enterprise Human Capital Management (oracle.com, OTN)
    - PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 (oracle.com, OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

    Read the article

  • Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available

    - by Elke Phelps (Oracle Development)
    Oracle VM Templates for Oracle E-Business Suite 12.1.3 for the x86 Exalogic Platform (64 bit) are now available on the Oracle Software Delivery Cloud. The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system install and the software install (via the EBS Rapid Install).

    The Oracle E-Business Suite Release 12.1.3 (64 bit) template for the Exalogic platform is an Oracle Virtual Server guest template that contains a complete Oracle E-Business Suite Release 12.1.3 database tier and application tier installation. For additional details, please refer to the following My Oracle Support note:

    - Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1)

    The Oracle E-Business Suite system is installed on top of Oracle Linux Version 5 Update 6. The templates have been optimized for performance, including OS kernel settings and E-Business Suite configuration settings tuned specifically for the Exalogic platform. The configuration delivered with this template for a mid-tier running on Exalogic will support hundreds of concurrent users. Please refer to Section 2: Performance Analysis in My Oracle Support Note 1499132.1 for additional details.

    Additional Information

    The Oracle E-Business Suite VM templates for the Exalogic platform contain the following software versions:

    - Operating System: Oracle Linux Version 5 Update 6
    - Oracle E-Business Suite 12.1.3 (Database Tier)
    - Oracle E-Business Suite 12.1.3 (Application Tier)

    The following considerations were made when the Oracle E-Business Suite VM templates for the Exalogic platform were designed:

    - The templates use the hardware-virtualized architecture and require hardware with virtualization support.
    - The Database Tier template is configured with 16 GB RAM, 4 VCPUs, and 250 GB of disk space for the application installation.
    - The Application Tier template is configured with 16 GB RAM, 4 VCPUs, and 50 GB of disk space for the application installation.

    References

    - Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1)

    Related Articles

    - Part 1: E-Business Suite 12.1.1 Templates for Oracle VM Now Available
    - Part 2: Using Oracle VM with Oracle E-Business Suite Virtualization Kit
    - Part 3: On Clouds and Virtualization in EBS Environments (OpenWorld 2009 Recap)
    - Part 4: Deploying E-Business Suite on Amazon Web Services Elastic Compute Cloud
    - Part 5: Live Migration of EBS Services Using Oracle VM
    - Support Policies for Virtualization Technologies and Oracle E-Business Suite
    - Virtualization and the E-Business Suite, Redux
    - Virtualization and E-Business Suite

    Read the article

  • APress Deal of the Day 31/Jul/2013 - Pro ASP.NET MVC 4

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/07/31/apress-deal-of-the-day-31jul2013---pro-asp.net-mvc.aspx

    Today's $10 deal of the day from APress at http://www.apress.com/9781430242369 is Pro ASP.NET MVC 4: "The ASP.NET MVC 4 Framework is the latest evolution of Microsoft's ASP.NET web platform. It provides a high-productivity programming model that promotes cleaner code architecture, test-driven development, and powerful extensibility, combined with all the benefits of ASP.NET."

    Read the article
