Search Results

Search found 14213 results on 569 pages for 'distributed programming'.


  • JavaOne Rock Star – Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine's 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions: TUT3907 - Java EE 6/7: The Lean Parts; CON3906 - Stress-Testing Java EE 6 Applications Without Stress; CON3908 - Building Serious JavaFX 2 Applications; CON3896 - Interactive Onstage Java EE Overengineering. I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. "This is a very good sign," said Bien. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn." Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle and based on the Da Vinci Machine (JSR 292); it is expected to be available in Java 8. Bien expressed concern about a common misconception regarding Java's supposedly mediocre productivity. "The problem is not Java," explained Bien, "but rather systems built with ancient patterns and approaches. Sometimes it really is 'Cargo Cult Programming.' Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are 'Ivory Towers' and not Java's lack of productivity." Bien remarked that if there is one thing he wants Java developers to understand, it is that "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize." He advised Java EE developers to "Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: 'Why?' If there is no clear answer -- just don't do it. Most well-known best practices are outdated. Focus your efforts on the domain problem and not the technology." Looking ahead, Bien remarked, "I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations." Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here.
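    Bien's "measure first" advice is straightforward to put into practice. Below is a minimal, hypothetical Java sketch of the approach he describes: warm the code path up, then time it with a monotonic clock before deciding whether it needs optimizing. The suspectedHotspot() method is a stand-in for whatever code path you suspect, not anything from the interview.

        import java.util.concurrent.TimeUnit;

        public class HotspotProbe {
            private static final int RUNS = 10_000;

            public static void main(String[] args) {
                // Warm up first so the JIT has a chance to compile the path being measured.
                for (int i = 0; i < RUNS; i++) {
                    suspectedHotspot();
                }
                // Measure with a monotonic clock; wall-clock time can jump around.
                long start = System.nanoTime();
                for (int i = 0; i < RUNS; i++) {
                    suspectedHotspot();
                }
                long avgMicros = TimeUnit.NANOSECONDS.toMicros((System.nanoTime() - start) / RUNS);
                System.out.println("average: " + avgMicros + " us per call");
            }

            // Stand-in for the code path under suspicion.
            private static void suspectedHotspot() {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 100; i++) {
                    sb.append(i);
                }
            }
        }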

    Read the article

  • Partner Webcast - Innovations in Products Program

    - by Richard Lefebvre
    We are pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents Oracle Applications products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire system integrators' implementation personnel to conduct successful after-sales work in their customer projects. Innovations in Products is presented on the first Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for system integrators' Implementation Certified Specialists, but it is open to other interested Oracle Applications system integrator personnel as well. First, two Oracle representatives will discuss Oracle's contribution to partners; then you will see the product breakout sessions followed by Q&A with Oracle experts. Each session lasts a maximum of 1 hour, and a Q&A document covering all questions and answers will be made available after the webcast.

    What are the benefits for partners?
    - Find out how Innovations in Products helps you to improve your after-sales work
    - Discover new functions and features so you can enrich your customers' solutions
    - Learn more about Oracle Applications products, especially sales positioning
    - Hear crucial questions raised by colleagues, and learn from their interests
    - Engage and present your questions to subject experts
    - Be inspired by the richness of the Oracle Applications portfolio, for your and your customer's benefit

    Note: Should you already be familiar with a specific product, then choose another one; doing so will expand your knowledge of the overall Applications portfolio. Some presentations contain product demonstrations, although these presentations are not intended to be extremely detailed technical presentations.

    Note: The latter part of this message also contains 17 links to the recent Applications Products presentations and 6 links to the Public Sector Value Proposition presentations that were presented in the Innovations in Industries program.

    Product breakout sessions (topic; speaker; registration):
    - Fusion Applications Technology and Extensibility: A next-generation platform that adapts to client needs. Matthew Johnson, Sr. Director, SCM Product Development, EMEA. CLICK HERE
    - Fusion Applications - Transforming your Back-Office Accounting Function: Changing how people work in back office functions to drive value add. Liam Nolan, Director, ERP Product Development, EMEA. CLICK HERE
    - Fusion HCM & Talent Overview & Extensibility: A more in-depth look into a personalized HCM solution. Synco Jonkeren, Vice-President, HCM Product Development & Management, EMEA. CLICK HERE
    - Fusion HCM Compensation Planning: Compensate To Compete. Rosie Warner, Director, HCM Sales Development. CLICK HERE
    - Enterprise PLM for the Product Value Chain: Oracle Enterprise PLM offers industry-specific solutions that cover the Product Value Chain. Ulf Köster, Sales Development Leader, Enterprise PLM, Oracle Western Europe. CLICK HERE
    - Oracle's Asset Management and Maintenance Solution: What you need to know to successfully implement Oracle Asset Management solutions within Oracle Installed Base. Philip Carey, Asset Management and Maintenance Solution Specialist. CLICK HERE

    For more details, please visit Innovations in Products and other breakout sessions on the OPN page.

    Delivery Format
    The Innovations in Products program is a series of FREE prerecorded Applications product presentations followed by Q&A, delivered over the Web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. First, two Oracle representatives discuss Oracle's contribution to partners; then come the product breakout sessions followed by Q&A with Oracle experts. A Q&A document covering all questions and answers is made available after the webcast. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months.

    The next Innovations in Products webcasts will be presented on:
    - July 2nd 2012
    - October 1st 2012
    - January 14th 2013
    - April 8th 2013

    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5.

    Duration: Maximum 1 hour

    For further information please contact me, Markku Rouhiainen.

    Recent Innovations in Products presentations

    Applications Products presented on April the 2nd, 2012 (topic; speaker; registration):
    - Fusion CRM: Effective, Efficient and Easy. James Penfold, Senior Director, Applications Product Development and Product Management. CLICK HERE
    - Fusion HCM: Talent management overview (performance, goals, talent review). Jaime Losantos Viñolas, Director, HCM Sales Development. CLICK HERE
    - Distributed Order Management - Fusion SCM Solution. Vikram K Singla, Business Development Director, Supply Chain Management Applications, UK. CLICK HERE
    - Oracle Transportation Management. Dominic Regan, Senior Director, Oracle Transportation Management EMEA. CLICK HERE
    - Oracle Value Chain Planning: Demantra Sales & Operation Planning and Demantra Demand Management. Lionel Albert, Senior Director, Value Chain Planning, EMEA. CLICK HERE
    - Oracle CX (Customer Experience) - formerly CEM: Powering Great Customer Experiences. Maria Ramirez, CRM Presales Consultant, EPC. CLICK HERE
    - EPM 11.1.2.2 Overview. Nicholas Cox, EMEA Sales Development Director - Enterprise Performance Management. CLICK HERE
    - Oracle Hyperion Profitability and Cost Management, 11.1.2.1. Daniela Lazar, Senior EPM Sales Consultant, EPC. CLICK HERE

    January the 16th 2012 (topic; speaker; registration):
    - CRM / ATG: Best-in-Class CRM & Commerce. Maria Ramirez, Associate CRM Presales Consultant, EPC. CLICK HERE
    - CRM / Automate Business Rules for Maximum Efficiency with OPA (Oracle Policy Automation). Marco Nilo, Associate CRM Presales Consultant, EPC. CLICK HERE
    - CRM / InQuira. Toby Baker, Principal Sales Consultant, CRM Product Specialist Team. CLICK HERE
    - EPM / Business Intelligence Foundation Suite - Sales and Product Updates. Liviu Nitescu, Senior BI Sales Consultant, EPC. CLICK HERE
    - EPM / Hyperion Planning 11.1.2.1 - Sales & Product Updates. Andreea Voinea, EPM Sales Consultant, EPC. CLICK HERE
    - ERP / JDE EnterpriseOne Fulfillment Management Overview. Mirela Andreea Nasta, ERP Presales Consultant, EPC. CLICK HERE
    - ERP / Spotlights on iExpenses. Elena Nita, ERP Presales Consultant, EPC. CLICK HERE
    - MDM / Master Data Management. Martin Boyd, Senior Director, Product Strategy. CLICK HERE
    - Product breakthrough session: Fusion Applications Human Capital Management. Rosie Warner, Director, HCM Sales Development. CLICK HERE

    Recent Innovations in Industries Value Proposition presentations

    January the 16th 2012 (topic; speaker; registration):
    - Process Modernisation. Iemke Idsingh, Public Sector Solutions Director. CLICK HERE
    - Shared Services. Ann Smith, Business Development Director, Shared Services. CLICK HERE
    - Strengthening Financial Discipline Whilst Delivering Cashable Savings. Philippa Headley, UK Sales Development Director, Public Sector - EPM Solutions. CLICK HERE
    - Social Welfare Industry Solutions. Christian Wernberg-Tougaard, Industry Director - Social Welfare. CLICK HERE
    - Police Industry Solutions. Jeff Penrose, Solution Sales Director. CLICK HERE
    - Tax and Revenue Management Industry Solutions. Andre van der Post, Global Director - Tax Solutions and Strategy. CLICK HERE

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst-case scenario, of course, is for a bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test: if issues are identified at this stage, the testers simply won't waste their time on the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following: 1) Your application code is being compiled and tested; you therefore need a database to be maintained at the corresponding version. 2) Just as a valid application should compile, so should the database; it should therefore be possible to build a new database from scratch (a sketch of this step follows below). 3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version. I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
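    To make point 2 concrete, here is a minimal, hypothetical Java sketch of a CI step that builds a database from scratch by replaying versioned SQL scripts in order. The scripts directory, the numeric naming convention, and the in-memory H2 connection URL are all assumptions for the example (H2 would need to be on the classpath); they are not part of any particular CI product.

        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.sql.*;
        import java.util.*;

        public class BuildDatabaseFromScratch {
            public static void main(String[] args) throws Exception {
                // Assumed layout: db/scripts/001_create_tables.sql, 002_add_indexes.sql, ...
                List<Path> scripts = new ArrayList<>();
                try (DirectoryStream<Path> dir =
                        Files.newDirectoryStream(Paths.get("db/scripts"), "*.sql")) {
                    dir.forEach(scripts::add);
                }
                Collections.sort(scripts); // numeric prefixes define the replay order

                // A throwaway in-memory database stands in for the CI build database.
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:ci");
                     Statement stmt = conn.createStatement()) {
                    for (Path script : scripts) {
                        String sql = new String(Files.readAllBytes(script), StandardCharsets.UTF_8);
                        stmt.execute(sql); // any failure fails the build, which is the point
                    }
                }
                System.out.println(scripts.size() + " scripts applied; database builds from scratch");
            }
        }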

    Read the article


  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't so expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure Connect endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure Connect endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish, and *BOOM* in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents and VS 2012 agents.

    Introduction
    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing (slides from Avanade).

    Scenario 1 - Running a Test Rig in Windows Azure
    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment and configuration of Windows Azure worker roles for the test controller and test agents
    - Automate deployment and configuration of the SQL database on the test controller worker role
    - Scale test agents on demand
    - Create a web performance test and a simple load test
    - Manage test controllers right from the Visual Studio on-premise developer workstation
    - View the results of the load test
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    Scenario 2 - The Scaled-Out Test Rig and Sharing Data Using SQL Azure
    A scaled-out version of this implementation would involve multiple test rigs running in the cloud. In this scenario I will show you how to sync the load test databases from these distributed test rigs into one SQL Azure database using Windows Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    The Ingredients
    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure subscription
    2. Windows Azure Storage - blob storage
    3. Windows Azure Compute - worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect - endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough
    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure blob storage (a rough sketch of this upload step appears below). SQL Express, the test controller and the test agent expose various installer switches we can take advantage of, including the quiet-install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network among the machines, enabling two-way communication. All of the above can be done programmatically; let's see step by step how.

    Scenario 1
    Video Walkthrough - Leveraging Windows Azure for Performance Testing

    Scenario 2
    Work in progress, watch this space for more…

    Solution
    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion
    Other posts and resources are available here. Possibilities… endless!
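    As a flavor of the pre-upload step in the walkthrough, here is a rough Java sketch of pushing an installer into blob storage using the Azure Storage client library for Java. The container name, file name, and environment variable are invented for the example, and the original post does this with the .NET tooling, so treat this purely as an illustration of the idea.

        import com.microsoft.azure.storage.CloudStorageAccount;
        import com.microsoft.azure.storage.blob.CloudBlobClient;
        import com.microsoft.azure.storage.blob.CloudBlobContainer;
        import com.microsoft.azure.storage.blob.CloudBlockBlob;

        import java.io.File;
        import java.io.FileInputStream;

        public class PreUploadInstaller {
            public static void main(String[] args) throws Exception {
                // Assumption: the storage connection string lives in the environment.
                CloudStorageAccount account = CloudStorageAccount.parse(
                        System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
                CloudBlobClient client = account.createCloudBlobClient();

                // Hypothetical container for the controller/agent/SQL Express installers.
                CloudBlobContainer container = client.getContainerReference("testrig-installers");
                container.createIfNotExists();

                File installer = new File("SQLEXPR_x64_ENU.exe"); // example file name
                CloudBlockBlob blob = container.getBlockBlobReference(installer.getName());
                try (FileInputStream in = new FileInputStream(installer)) {
                    blob.upload(in, installer.length()); // worker roles pull this at start-up
                }
                System.out.println("Uploaded " + installer.getName());
            }
        }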

    Read the article

  • Play Thousands of Online Radio Stations with Shoutcast in VLC

    - by DigitalGeekery
    Are you looking for more variety from your radio stations? Today we'll take a look at how to easily stream thousands of radio stations to your desktop with VLC media player. Getting Started: Select Media from the menu, go to Services Discovery, and click Shoutcast radio listings. Next, select View from the menu and click Playlist. Or, click on the Show Playlist button. In the Playlist window, click on Shoutcast radio listings in the left pane. You should then see a very long list of Titles displayed on the right. Scroll through the list to find a music genre or topic that interests you. Double-click to expand the list of station options. Select one of the channel listings from the list and double-click to begin playing. Looking for a specific station? Type a search term into the search filter box to see if it is available. That's it. Sit back and enjoy listening to your favorite Internet radio programming. If you are a music or talk radio fan, you aren't likely to run out of listening options in VLC. Want to find some more uses for VLC? Check out our articles on how to copy a DVD, convert video files to MP3, and how to set a video as your desktop wallpaper. Download VLC

    Read the article

  • Game development: “Play Now” via website vs. download & install

    - by Inside
    Heyo, I've spent some time looking over the various threads here on gamedev and also on the regular Stack Overflow, and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen much discussion regarding the various platforms they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario and gameplay with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). For simple and short games the Newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "ok, I'm going to download and play that"? Is it worth it to go with engines that are less mature, have less documentation, have fewer features, and have smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web environment for the kind of game that I want to make? Does anyone have any experiences with decisions like this? Thanks! (* With the exception of Flash-based engines, it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with Flash, but I'm worried that Flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played -- at that rate I might as well have players download my game.)

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So, perhaps a better, general question would be: What kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like, in comparison to other popular languages? Background I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), possibly integrating a blog and all the social-networking goodies. Specific Problem I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass? Additional Details My experiences are in C/C++/Tcl/Perl/PHP/jQuery, and basic knowledge of Java/C#. I didn't study C.S. formally so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, while developing in parallel, for shortness of time). Worse comes to worst, I'd give up and do the standard LAMP gig, of course, not charging the client for wasted time. But I'm hoping to avoid the project altogether if it's gonna come down to that! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-made for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.

    Drivers
    - Cost avoidance
    - Time to market
    - Scalability

    Solution
    Here's a sketch of how a basic Software as a Service solution might be built out:

    Ingredients
    - Web Role - this hosts the core web application. Each web role will host an instance of the software, and as the user base grows, additional roles can be spun up to meet demand.
    - Access Control - this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    - Databases - nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants (a minimal sketch of this per-tenant routing appears below).
    - Worker Role - this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer-grained control over the system's scalability and throughput.
    - Caching (optional) - as web site traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache, for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Blobs (optional) - depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.

    Training & Examples
    These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

    Windows Azure (16 labs) - Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

    SQL Azure (7 labs) - Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    Windows Azure Services (9 labs) - As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    Developing Applications for the Cloud, 2nd Edition (eBook) - This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.

    Fabrikam Shipping (SaaS reference application) - This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.

    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
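    To make the per-tenant database ingredient concrete, here is a small, hypothetical Java sketch of the routing pattern: resolve the caller's tenant to its own connection string before touching any data, so one tenant's queries can never reach another tenant's database. The hard-coded catalog map and JDBC URLs are stand-ins for whatever directory (a configuration store or master database) a real system would use.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.HashMap;
        import java.util.Map;

        public class TenantConnections {
            // Stand-in for a tenant directory; each tenant gets its own database instance.
            private static final Map<String, String> CATALOG = new HashMap<>();
            static {
                CATALOG.put("fabrikam", "jdbc:sqlserver://srv1.example.net;database=saas_fabrikam");
                CATALOG.put("contoso",  "jdbc:sqlserver://srv2.example.net;database=saas_contoso");
            }

            public static Connection openFor(String tenantId) throws SQLException {
                String url = CATALOG.get(tenantId);
                if (url == null) {
                    throw new IllegalArgumentException("Unknown tenant: " + tenantId);
                }
                // Isolation falls out of the routing: there is no shared database to leak across.
                return DriverManager.getConnection(url);
            }
        }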

    Read the article

  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role, or rather been found by a recruiter looking for a developer. In order to get to the telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly obtuse questions asking about the results of various obscure evaluations. About 30 minutes in I was thinking to myself that I hadn't intended to enter an obfuscated code contest/code golf exercise. After my last telephone interview I was asked to build something. I did. That seemed fair. "Go away and work this out" is more my in-office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples, etc." So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases. My questions, then: Should I be put off? Why? Why not? Are these kinds of tests what I should be expecting for junior roles? Should I learn stuff exam-style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books. Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up against something I need to use a regex for and have to go and remind myself of the exact piece of syntax I need if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and/or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call, and you might also want select() depending on whether you intend to pthread_create or not. So I know what the calls are, but not their specific parameter lists. What are your experiences if you are a recruiting manager? Are you after programmers who can quote you the API, or do you not mind if your programmers have a few books on their desk and google function calls every so often?
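    For illustration only, the server skeleton the poster describes maps to Java like this (ServerSocket rolls the socket()/bind()/listen() calls into one constructor, and accept() stays explicit). This is a hypothetical echo server, not anything from the assessment in question; select()-style multiplexing would use java.nio instead of a thread per client.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class EchoServer {
            public static void main(String[] args) throws Exception {
                // socket() + bind() + listen(), rolled into one call.
                try (ServerSocket server = new ServerSocket(7777)) {
                    while (true) {
                        Socket client = server.accept(); // blocks until a client connects
                        new Thread(() -> handle(client)).start(); // one thread per client
                    }
                }
            }

            private static void handle(Socket client) {
                try (BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println(line); // echo each line back
                    }
                } catch (Exception e) {
                    // a real server would log this
                }
            }
        }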

    Read the article

  • iPack -The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications and to provide identification of their source, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7.

    Source Code
    The source code of the iPack tool is in the OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack

    To build the iPack tool use:
    rt/tools/ios/Maven/ipack$ mvn package

    After building, you can run the tool:
    java -jar <path to ipack.jar> <arguments>

    Signing keystore
    The tool uses a Java keystore to read the signing certificate and the associated private key. To prepare such a keystore, users can use keytool from the JDK. One possible scenario is to import an existing private key and certificate from a keystore used on Mac OS.

    To list the content of an existing keystore and identify the source alias:
    keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password>

    To create a Java keystore and import the private key with its certificate into the keystore:
    keytool -importkeystore \
      -destkeystore <dst keystore> -deststorepass <dst keystore password> \
      -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \
      -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password>

    Another scenario would be to generate a private/public key pair directly in a Java keystore and create a certificate request from it. After sending the request to Apple, one can then import the certificate response back into the Java keystore and complete the signing certificate entry. In both scenarios the resulting alias in the Java keystore will contain only a single (leaf) certificate. This can be verified with the following command:
    keytool -list -v -keystore <ipack keystore> -storepass <keystore password>

    When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format: a certificate chain can be created by concatenating the PEM certificates that should form the chain into a single file.
    For iOS signing we need the following certificates in our chain:
    - Apple Root CA
    - Apple Worldwide Developer Relations CA
    - Our signing leaf certificate

    To convert a certificate from the binary DER format (.der, .cer) to PEM format:
    keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer
    keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem

    To export the signing certificate into PEM format:
    keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem

    After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with:
    keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem

    To summarize, the following example shows the full certificate chain replacement process:
    keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer
    keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem
    keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer
    keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem
    keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem
    cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem > SigningCertChain.pem
    keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem
    keytool -list -v -keystore ipack.ks -storepass keystorepwd

    Usage
    When the ipack tool is started with no arguments it prints the following usage information:

    Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ]

    Signing options:
    -keystore <keystore>   keystore to use for signing
    -storepass <password>  keystore password
    -alias <alias>         alias for the signing certificate chain and the associated private key
    -keypass <password>    password for the private key

    Application options:
    -basedir <directory>   base directory from which to derive relative paths
    -appdir <directory>    directory with the application executable and resources
    -appname <file>        name of the application executable
    -appid <id>            application identifier

    Example:
    ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication

    Read the article

  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. So one idea I have, as a fun toy project to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing 1 TB, let's say. So with that, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so I think that that obviously has to apply to a database, too, and it clearly does -- whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem. That said, there would come a point in scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory would be much slower, and regardless, some portion of the database has to be in primary memory in order for you to be able to do anything with it. I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large, too. So my question is: how is I/O generally done with large databases like the one I described above, which would be at least 1 TB storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work -- what data structures should be involved?
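    To make the indexing idea concrete, here is a toy Java sketch of the usual trick: keep a small index (node id to byte offset) in primary memory and seek directly into the large adjacency-list file in secondary memory, so a lookup costs one seek plus one sequential read regardless of file size. The file format and names are invented for the example; real engines layer pages, caches, B-tree indexes, and write-ahead logs (for the ACID part) on top of this.

        import java.io.RandomAccessFile;
        import java.util.HashMap;
        import java.util.Map;

        public class AdjacencyStore {
            // In-memory index: node id -> offset of its adjacency record on disk.
            // When this map itself outgrows memory, it becomes an on-disk structure
            // such as a B-tree, which is what production databases do.
            private final Map<Long, Long> index = new HashMap<>();
            private final RandomAccessFile data;

            public AdjacencyStore(String path) throws Exception {
                data = new RandomAccessFile(path, "rw");
            }

            public void append(long nodeId, long[] neighbors) throws Exception {
                long offset = data.length();
                data.seek(offset);
                data.writeLong(nodeId);           // record header: owning node id
                data.writeInt(neighbors.length);  // then the neighbor count
                for (long n : neighbors) {
                    data.writeLong(n);
                }
                index.put(nodeId, offset); // one index entry per node, not per edge
            }

            public long[] neighborsOf(long nodeId) throws Exception {
                Long offset = index.get(nodeId);
                if (offset == null) {
                    return new long[0];
                }
                data.seek(offset + 8); // skip the stored 8-byte node id
                int count = data.readInt();
                long[] result = new long[count];
                for (int i = 0; i < count; i++) {
                    result[i] = data.readLong();
                }
                return result;
            }
        }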

    Read the article

  • Source of (programmer) inefficiency

    - by Daniel
    I am interested in gaining better insight into the possible reasons for personal inefficiency as programmers (and only in programming) due, simply, to our own errors (because we are human - well, almost all of us). I am not interested in how productive we are or in how many adjustments the customer asks for when the work is done, but in where and how each of us spends part of our time on tasks that are unproductive, with no one to blame except ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to get (for all of us) is:
    - what the common issues eating our time are;
    - insight into the reasons for those issues;
    - simple ways for each of us, personally (not delegating actions to others or to our organizations), to correct our own problems.

    Please, do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we try to fix them. If you are interested in responding to this post, please:
    - add to the list if you see something important (or obvious) missing;
    - honestly highlight or name your first issue, telling the way you try to address and solve it, acting on yourself and yourself only, in a sort of "continuous quality improvement".

    My criterion for accepting the answer is: choose the best solution (feasibility and utility) to fix one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers who lose only ten minutes every year, or we are terribly inefficient, losing a couple of days a week; the reasons for inefficiency could be really the same - just on a different scale.

    A possible list:
    - Plain errors in names (variables, functions).
    - Inability to see the obvious in your code.
    - Misreading.
    - Lack of concentration.
    - Trying to use a technology you have not mastered.
    - Errors with data types.
    - Time required to understand your previous code or your documentation.
    - Trying to do more than requested because you enjoy it.
    - Using solutions more complicated than required because you enjoy it.
    - Plain logical errors.
    - Errors due to your fault in communications.
    - Distraction.

    My first personal issue: "Trying to use a technology you have not mastered." I have to use several technologies daily, and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason for this: production needs put on high pressure and make it difficult to find the time to learn. I try to address this by reading technical books - as many as I can - even if this actually consumes a lot of time.

    Read the article

  • Commit Review Questions

    - by Wes McClure
    Note: In this article, when I refer to a commit, I mean the commit you plan to share with the rest of the team; if you have local commits that you plan to amend/combine, I am referring to the final result. In time you will find these easier to do as you develop; however, all of these are valuable before checking in! The pre-commit review is a nice time to polish what might have been several hours of intense work, during which these things were the last things on your mind. If you are concerned about losing your work in the process of responding to these questions, first do a check-in and amend it as you go (assuming you are using a tool such as git that supports this), rolling the result into one nice commit for everyone else (a minimal command sketch follows this list).

    - Did you review your commit, change by change, with a diff utility? If not, this is a list of reasons why you might want to start!
    - Did you test your changes? If the test is valuable enough to be automated, is it? If it's a manual testing scenario, did you at least try the basics manually?
    - Are the additions/changes formatted consistently with the rest of the project? Lots of automated tools can help here; don't try to manually format the code - that's a waste of time, and as a human you will fail repeatedly. Are these consistent: tabs versus spaces, indentation, spacing, braces, line breaks, etc.? Resharper is a great example of a tool that can automate this for you (.NET).
    - Are naming conventions respected? Did you accidentally use abbreviations without a good reason to use them? Does capitalization match the conventions in the project/language?
    - Are files partitioned? Sometimes we add new code to existing files in a pinch; it's a good idea to split these out if they don't belong, i.e., are new classes defined in new files, if this is something your project values?
    - Is there commented-out code? If you are removing an existing feature, get rid of it; that is why we have VCS. If it's not done yet, then why are you checking it in? Perhaps a stash commit (git)?
    - Did you leave debug or unnecessary changes?
    - Do you understand all of the changes? http://geekswithblogs.net/wesm/archive/2012/04/11/programming-doesnrsquot-have-to-be-magic.aspx
    - Are there spelling mistakes? Including in your commit message!
    - Is your commit message concise?
    - Is there follow-up work? Are there tasks you didn't write down that you need to follow up on? Are readability or reorganization changes needed? These might be amended into the final commit, or they might be future work that needs to be added to the backlog.
    - Are there other things your team values that you should review?
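    For the check-in-and-amend workflow mentioned above, a minimal git sequence might look like the following; the branch and commit message are illustrative, the commands themselves are standard git:

        git add -p                      # stage changes hunk by hunk, reviewing each diff
        git commit -m "WIP: feature"    # checkpoint so review fixes cannot lose work
        git commit --amend --no-edit    # fold each polish fix into the pending commit
        git diff origin/master          # final change-by-change review before sharing
        git stash                       # shelve anything that is not ready to share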

    Read the article

  • How to Use Your Android Phone as a Modem; No Rooting Required

    - by Jason Fitzpatrick
    If your cellular provider's mobile hotspot/tethering plans are too pricey, skip them and tether your phone to your computer without inflating your monthly bill. Read on to see how you can score free mobile internet. We recently received a letter from a How-To Geek reader, requesting help linking their Android phone to their laptop to avoid the highway robbery their cellular provider was insisting upon:

    Dear How-To Geek, I recently found out that my cellphone company charges $30 a month to use your smartphone as a data modem. That's an outrageous price when I already pay an extra $15 a month charge just because they insist that because I have a smartphone I need a data plan, because I'll be using so much more data than other users. They expect me to pay what amounts to a $45 a month premium just to do some occasional surfing and email checking from the comfort of my laptop instead of the much smaller smartphone screen! Surely there is a workaround? I'm running Windows 7 on my laptop and I have an Android phone running Android OS 2.2. Help! Sincerely, No Double Dipping!

    Well Double Dipping, this is a sentiment we can strongly relate to, as many of us on staff are in a similar situation. It's absurd that so many companies charge you to use the data connection on the phone you're already paying for. There is no difference in bandwidth usage if you stream Pandora to your phone or to your laptop, for example. Fortunately we have a solution for you. It's not free-as-in-beer, but it only costs $16 which, over the first year of use alone, will save you $344. Let's get started!

    Read the article

  • Building a Flash Platformer

    - by Jonathan O
    I am basically making a game where the whole game is run in the onEnterFrame method. This is causing a delay in my code that makes debugging and testing difficult. Should programming an entire platformer in this method be efficient enough for me to run hundreds of lines of code? Also, do variables in Flash get updated immediately? Are there just lots of threads listening at the same time? Here is the code...

    stage.addEventListener(Event.ENTER_FRAME, onEnter);

    function onEnter(e:Event):void
    {
        // Jumping
        if (Yoshi.y > groundBaseLevel)
        {
            dy = 0;
            canJump = true;
            onGround = true; // This line is not updated in time
        }
        if (Key.isDown(Key.UP) && canJump)
        {
            dy = -10;
            canJump = false;
            onGround = false; // This line is not updated in time
        }
        if (!onGround)
        {
            dy += gravity;
            Yoshi.y += dy;
        }

        // limit screen boundaries
        // character movement
        if (!Yoshi.hitTestObject(Platform)) // no collision detected
        {
            if (Key.isDown(Key.RIGHT))
            {
                speed += 4;
                speed *= friction;
                Yoshi.x = Yoshi.x + movementIncrement + speed;
                Yoshi.scaleX = 1;
                Yoshi.gotoAndStop('Walking');
            }
            else if (Key.isDown(Key.LEFT))
            {
                speed -= 4;
                speed *= friction;
                Yoshi.x = Yoshi.x - movementIncrement + speed;
                Yoshi.scaleX = -1;
                Yoshi.gotoAndStop('Walking');
            }
            else
            {
                speed *= friction;
                Yoshi.x = Yoshi.x + speed;
                Yoshi.gotoAndStop('Still');
            }
        }
        else // bounce back: collision detected
        {
            if (Yoshi.hitTestPoint(Platform.x - Platform.width/2, Platform.y - Platform.height/2, false))
            {
                trace('collision left');
                Yoshi.x -= 20;
            }
            if (Yoshi.hitTestPoint(Platform.x, Platform.y - Platform.height/2, false))
            {
                trace('collision top');
                onGround = true; // This update is not happening in time
                speed = 0;
            }
        }
    }

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory) to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

    The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

    client-libs (predominantly discussed on 2d-dev, awt-dev, and swing-dev): 2d, demo, java.awt, java.awt:i18n, java.beans (see beans-dev), javax.accessibility, javax.imageio, javax.sound (see sound-dev), javax.swing

    core-libs (see core-libs-dev): java.io, java.io:serialization, java.lang, java.lang.invoke, java.lang:class_loading, java.lang:reflect, java.math, java.net, java.nio (discussed on nio-dev), java.nio.charsets, java.rmi, java.sql, java.sql:bridge, java.text, java.util, java.util.concurrent, java.util.jar, java.util.logging, java.util.regex, java.util:collections, java.util:i18n, javax.annotation.processing, javax.lang.model, javax.naming (JNDI), javax.script, javax.script:javascript, javax.sql, org.openjdk.jigsaw (see jigsaw-dev)

    security-libs (see security-dev): java.security, javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC), javax.crypto:pkcs11 (JCE: PKCS11 only), javax.net.ssl (JSSE, includes javax.security.cert), javax.security, javax.smartcardio, javax.xml.crypto, org.ietf.jgss, org.ietf.jgss:krb5

    other-libs: corba, corba:idl, corba:orb, corba:rmi-iiop, javadb, other (when no other subcomponent is more appropriate; use judiciously)

    Most of the subcomponents in the xml component are related to jaxp.

    xml: jax-ws, jaxb, javax.xml.parsers (JAXP), javax.xml.stream (JAXP), javax.xml.transform (JAXP), javax.xml.validation (JAXP), javax.xml.xpath (JAXP), jaxp (JAXP), org.w3c.dom (JAXP), org.xml.sax (JAXP)

    For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

    hotspot (see hotspot-dev): build, compiler (see hotspot-compiler-dev), gc (garbage collection; see hotspot-gc-dev), jfr (Java Flight Recorder), jni (Java Native Interface), jvmti (JVM Tool Interface), mvm (Multi-Tasking Virtual Machine), runtime (see hotspot-runtime-dev), svc (serviceability), test

    core-svc (see serviceability-dev): debugger, java.lang.instrument, java.lang.management, javax.management, tools

    The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot, as well as retired APIs.

    vm-legacy: jit (Sun Exact VM), jit_symantec (Symantec VM, before Exact VM), jvmdi (JVM Debug Interface), jvmpi (JVM Profiler Interface), runtime (Exact VM runtime)

    Notable command line tools in the $JDK/bin directory have corresponding subcomponents.

    tools: appletviewer, apt (see compiler-dev), hprof, jar, javac (see compiler-dev), javadoc(tool) (see compiler-dev), javah (see compiler-dev), javap (see compiler-dev), jconsole, launcher, updaters (timezone updaters, etc.), visualvm

    Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

    infrastructure: build (see build-dev and build-infra-dev), licensing (covers updates to the third party readme, licenses, and similar files), release_eng (release engineering), staging (staging of web pages related to JDK releases)

    The specification component encompasses the formal language and virtual machine specifications.

    specification: language (The Java Language Specification), vm (The Java Virtual Machine Specification)

    The code for the deploy and install areas is not currently included in OpenJDK.

    deploy: deployment_toolkit, plugin, webstart

    install: auto_update, install, servicetags

    In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

    docs: doclet, guides, hotspot, release_notes, tools, tutorial

    embedded: build, hotspot, libraries

    globalization: locale-data, translation

    performance: hotspot, libraries

    The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data, and a subscribing Hub stores only the source system IDs, the foreign keys (record IDs on the source systems), and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems. (A minimal relational sketch of such a registry appears at the end of this entry.)

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events, but it is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that is authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances, and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching, and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style Hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems, or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus, or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep the data current with business rules/practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at the departmental, agency, and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
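
    To make the Registry Style concrete, here is a minimal relational sketch of a hub cross-reference table. This is an illustration only, assuming a SQL-based hub; the table, columns, and source-system names are hypothetical and not part of any Oracle MDM product schema.

        -- Hypothetical registry-style hub: only source keys and the attributes
        -- needed for matching are stored centrally; full records stay in the spokes.
        CREATE TABLE dbo.CustomerRegistry (
            GlobalId        UNIQUEIDENTIFIER NOT NULL,  -- assigned by the matching engine
            SourceSystem    VARCHAR(50)      NOT NULL,  -- e.g. 'CRM', 'ERP', 'BILLING'
            SourceRecordId  VARCHAR(100)     NOT NULL,  -- record ID in the source system
            MatchName       NVARCHAR(200)    NULL,      -- key values used by matching rules
            MatchTaxId      VARCHAR(20)      NULL,
            MatchPostalCode VARCHAR(10)      NULL,
            CONSTRAINT PK_CustomerRegistry PRIMARY KEY (SourceSystem, SourceRecordId)
        );

        -- The "virtual" golden view is assembled at query time by federating all
        -- source records that the matching engine linked to one global identifier.
        DECLARE @GlobalId UNIQUEIDENTIFIER = '00000000-0000-0000-0000-000000000001';
        SELECT GlobalId, SourceSystem, SourceRecordId, MatchName
        FROM dbo.CustomerRegistry
        WHERE GlobalId = @GlobalId;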

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. The issue usually comes up when data is being modified; retrieval of data alone does not raise it. Most concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming.

    1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the changes made by the earlier one.

    2) Dirty Reads – This problem occurs when a transaction selects data that isn't committed by another transaction, leading it to read data which may not exist once the transactions are over. Example: Transaction 1 changes the row. Transaction 2 selects the changed row. Transaction 1 rolls back the changes. Transaction 2 has now selected a row which does not exist.

    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data return different values because another transaction has updated the data between the two SELECT statements. Example: Transaction 1 selects a row, which is then updated by Transaction 2. When Transaction 1 later selects the row again, it gets a different value.

    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leading to inconsistent data in the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows which are marked as deleted rows; during the same time, Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting its rows, there will still be rows marked to be deleted.

    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with isolation levels or locking hints (e.g. NOLOCK), which can very well compromise your data integrity, leading to much larger issues in the future. Here is a quick mapping of the isolation levels to the concurrency problems they permit:

    Isolation Level     Dirty Reads   Lost Update   Nonrepeatable Reads   Phantom Reads
    Read Uncommitted    Yes           Yes           Yes                   Yes
    Read Committed      No            Yes           Yes                   Yes
    Repeatable Read     No            No            No                    Yes
    Snapshot            No            No            No                    No
    Serializable        No            No            No                    No

    I hope this small 400-word article gives some quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
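
    To see the dirty-read case in action, here is a minimal T-SQL sketch; the dbo.Account table and its values are hypothetical, and the two halves must be run in two separate connections.

        -- Setup (hypothetical table).
        CREATE TABLE dbo.Account (AccountId INT PRIMARY KEY, Balance MONEY NOT NULL);
        INSERT INTO dbo.Account (AccountId, Balance) VALUES (1, 100);

        -- Connection 1: change the row but leave the transaction open.
        BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = 900 WHERE AccountId = 1;

        -- Connection 2: under READ UNCOMMITTED the uncommitted value is visible,
        -- which is exactly the dirty read described above.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Account WHERE AccountId = 1;  -- returns 900

        -- Connection 1: roll back; the 900 that Connection 2 read never existed.
        ROLLBACK TRANSACTION;

        -- Under the default READ COMMITTED level, Connection 2's SELECT would
        -- instead block until Connection 1 commits or rolls back.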

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • View the Real Links Behind Shortened URLs in Chrome

    - by Asian Angel
    When you encounter shortened URLs there is always that worry in the back of your mind about where they really lead. Now you can get a "sneak peek" at the real links behind those URLs with the View Thru extension for Google Chrome. The URL shortening services officially supported at this time are: bit.ly, cli.gs, ff.im, goo.gl, is.gd, nyti.ms, ow.ly, post.ly, su.pr, & tinyurl.com.

    Before: When you encounter a shortened URL you are pretty much on your own in deciding whether or not to trust that link. It would be really nice if you could just hover your mouse over those links and know where they lead ahead of time.

    After: Once you have the extension installed you are ready to access that link-viewing goodness. Please note that you will need to reload any pages that were open prior to installing the extension. For our first example we chose a shortened URL from bit.ly. As you can see, the entire link behind the shortened URL is displayed very nicely…no hidden surprises there! Note: There are no options to worry about for the extension. Another perfect result for the goo.gl URL shown below. View Thru will certainly remove a lot of the stress related to clicking on shortened URLs.

    Bonus Find: Just out of curiosity we looked for a shortened URL not listed as being officially supported at this time. We found one with the http://nyti.ms/ domain and View Thru showed the link perfectly…so be sure to give it a try on other services too.

    Conclusion: If you worry about where a shortened URL will really lead you, the View Thru extension can help alleviate that stress.

    Links: Download the View Thru extension (Google Chrome Extensions)

    Read the article

  • Review: Backbone.js Testing

    - by george_v_reilly
    Title: Backbone.js Testing
    Author: Ryan Roemer
    Rating: 4.5 stars
    Publisher: Packt
    Copyright: 2013
    ISBN: 178216524X
    Pages: 168
    Keywords: programming, testing, javascript, backbone, mocha, chai, sinon
    Reading period: October 2013

    Backbone.js Testing is a short, dense introduction to testing JavaScript applications with three testing libraries: Mocha, Chai, and Sinon.JS. Although the author uses a sample application of a personal note manager written with Backbone.js throughout the book, much of the material would apply to any JavaScript client or server framework. Mocha is a test framework, executed in the browser or by Node.js, which runs your tests. Chai is a framework-agnostic TDD/BDD assertion library. Sinon.JS provides standalone test spies, stubs, and mocks for JavaScript. They complement each other, and the author does a good job of explaining when and how to use each.

    I've written a lot of tests in Python (unittest and mock, primarily) and C# (NUnit), but my experience with JavaScript unit testing was both limited and years out of date. The JavaScript ecosystem continues to evolve rapidly, with new browser frameworks and Node packages springing up everywhere. JavaScript has some particular challenges in testing—notably, asynchrony and callbacks. Mocha, Chai, and Sinon meet those challenges, though they can't take away all the pain. The author describes how to test Backbone models, views, and collections; how to deal with asynchrony; useful testing heuristics, including isolating components to reduce dependencies; when to use stubs, mocks, and fake servers; and test automation with PhantomJS. He does not, however, teach you Backbone.js itself; for that, you'll need another book.

    There are a few areas which I thought were dealt with too lightly. There's no real discussion of test-driven development or behavior-driven development, which provide the intellectual foundations of much of the book. Nor does he have much to say about testability and how to make legacy code more testable. The sample Notes app has plenty of testing seams (much of this falls naturally out of the architecture of Backbone); other apps are not so lucky. The chapter on automation is extremely terse—it could be expanded into a very large book!—but it does provide useful pointers to many areas for exploration.

    I learned a lot from this book and I have no hesitation in recommending it.

    Disclosure: Thanks to Ryan Roemer and Packt for a review copy of this book.

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal.

    After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works, and it gives you a quick leg up. The concepts are pretty straightforward. There is a series of CLR commands that you use to configure a test and the test assertions; in between, you're calling TSQL, whether calls to your structures, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused.

    One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult and therefore less useful.

    Functionally, this is pretty similar to the Visual Studio tests and the TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage and run the tests from makes a huge difference. Oh, and don't worry: you can still run these tests directly from TSQL too, so automation has not gone away.

    I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts and videos. Watch this space. Try the tool.
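
    For a flavor of what these tests look like under the covers, here is a small sketch using the tSQLt framework that SQL Test builds on. The dbo.Customer table and dbo.GetActiveCustomers procedure are hypothetical; tSQLt.NewTestClass, tSQLt.FakeTable, tSQLt.AssertEqualsTable, and tSQLt.Run are the framework's own procedures.

        -- Create a test class (a schema) to group related tests.
        EXEC tSQLt.NewTestClass 'CustomerTests';
        GO
        CREATE PROCEDURE CustomerTests.[test GetActiveCustomers returns only active rows]
        AS
        BEGIN
            -- Swap the real table for an empty, constraint-free copy.
            EXEC tSQLt.FakeTable 'dbo.Customer';
            INSERT INTO dbo.Customer (CustomerId, Name, IsActive)
            VALUES (1, 'Alice', 1), (2, 'Bob', 0);

            CREATE TABLE #Expected (CustomerId INT, Name NVARCHAR(100));
            INSERT INTO #Expected VALUES (1, 'Alice');

            CREATE TABLE #Actual (CustomerId INT, Name NVARCHAR(100));
            INSERT INTO #Actual EXEC dbo.GetActiveCustomers;

            -- Table comparison: the assertion so often missing from TSQL testing.
            EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
        END;
        GO
        -- Runnable from plain TSQL, so automation is straightforward.
        EXEC tSQLt.Run 'CustomerTests';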

    Read the article

  • What type of interview questions should you ask for "legacy" programmers?

    - by Marcus Swope
    We have recently been receiving lots of applicants for our open developer positions from people who I like to refer to as "legacy" programmers. I don't like the term "old" because it seems a little prejudiced (especially to HR!) and it doesn't accurately reflect what I mean. We are a company that does primarily .NET development using TDD in an Agile environment, we use Git as a source control system, we make heavy use of OSS tools and projects and we contribute to them as well, we have a strong bias towards adhering to strong Object-Oriented principles, SOLID, etc, etc, etc... Now, the normal list of questions that we ask doesn't really seem to apply to applicants that are fresh out of school, nor does it seem to apply to these "legacy" programmers. Here is how I (loosely) define a "legacy" programmer. Spent a significant amount of their career working almost exclusively with Assembly/Machine Languages. Primary accomplishments include work done with TANDEM systems. Has extensive experience with technologies like FoxPro and ColdFusion It's not that we somehow think that what we do is "better" than what they do, on the contrary, we respect these types of applicants and we are scared that we may be missing a good candidate. It is just very difficult to get a good read on someone who is essentially speaking a different language than you. To someone like this, it seems a little strange to ask a question like: What is the difference between an abstract class and an interface? Because, I would think that they would almost never know the answer or even what I'm talking about. However, I don't want to eliminate someone who could be a very good candidate in their own right and could be able to eventually learn the stuff that we do. But, I also don't want to just ask a bunch of behavioral questions, because I want to know about their technical background as well. Am I being too naive? Should "legacy" programmers like this already know about things like TDD, source control strategies, and best practices for object-oriented programming? If not, what questions should we ask to get a good representation about whether or not they are still able to learn them and be able to keep up in our fast-paced environment? EDIT: I'm not concerned with whether or not applicants that meet these criteria are in general capable or incapable, as I have already stated that I believe that they can be 100% capable. I am more interested in figuring out how to evaluate their talents, as I am having a hard time figuring out how to determine if they are an A+ "legacy" programmer or if they are a D- "legacy" programmer. I've worked with both.

    Read the article

  • XNA Notes 001

    - by George Clingerman
    Just a quick recap of things I noticed going on in or around the XNA community this past week. I'm sure there's a lot I missed (it's a pretty big community with lots of different parts to it), but these were the things I caught that I thought were pretty cool.

    The XNA Team
    Michael Klucher gave a list of books every gamer should read. http://twitter.com/#!/mklucher/status/22313041135673344
    Shawn Hargreaves posted about Nelxon Studio's cheatsheet for converting from XNA 3.1 to 4.0. http://blogs.msdn.com/b/shawnhar/archive/2011/01/04/xna-3-1-to-4-0-cheat-sheet.aspx?utm_source=twitterfeed&utm_medium=twitter
    XNA Game Studio won the Frontline award for Programming Tool from GameDev magazine! Congrats to the XNA team! http://www.gdmag.com/homepage.htm

    XNA MVPs
    In January several MVPs were up for re-election; Jim Perry, Andy 'The ZMan' Dunn, Glenn Wilson, and I were all re-awarded a Microsoft MVP award for our contributions to the XNA/DirectX communities. https://mvp.support.microsoft.com/communities/mvp.aspx?product=1&competency=XNA%2fDirectX
    A movement to get Michael McLaughlin an MVP award has started, and you can join in too! http://twitter.com/#!/theBigDaddio/status/22744458621620224 http://www.xnadevelopment.com/MVP/MichaelMcLaughlinMVP.txt
    Don't forget you can nominate ANYONE for an MVP award; that's how they work. https://mvp.support.microsoft.com/gp/mvpbecoming

    XNA Developers
    James Silva of Ska Studios hit 9,200 sales of ZP2KX and recommends you listen to Infected Mushroom. http://twitter.com/#!/Jamezila/status/22538865357094912 http://en.wikipedia.org/wiki/Infected_Mushroom
    Noogy, creator of the upcoming XBLA title Dust: An Elysian Tail, posts some details of his art creation. http://noogy.com/image/statue/statue.html

    Xbox LIVE Indie Game News
    Microsoft posts an acknowledgment that there was an issue with the sales data, which has been addressed, with an apology for not posting about it sooner. http://forums.create.msdn.com/forums/p/71347/436154.aspx#436154
    Winter Uprising sales are still chugging along, and the numbers are being updated by Xalterax (by those developers willing to actually share sales numbers; thanks for sharing, guys, much appreciated!). http://forums.create.msdn.com/forums/t/70147.aspx
    Don't forget about Dream Build Play coming up in February! http://www.dreambuildplay.com/Main/Home.aspx
    The Best Xbox LIVE Indie Games December Edition comes out on NeoGAF. http://www.neogaf.com/forum/showthread.php?t=414485
    The Greatest Xbox LIVE Indie Games of 2010 on Dealspwn; congrats to DrMistry and MStarGames for the #1 spot with the massive XBLIG Space Pirates From Tomorrow! http://www.dealspwn.com/xbligoty-2010/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Dealspwn+%28Dealspwn%29

    XNA Game Development
    The future of XACT on WP7 has finally been confirmed, and we now know what our options are for looping audio seamlessly on WP7. http://forums.create.msdn.com/forums/p/61826/436639.aspx#436639
    Super Mario 3 Design Notes is an interesting read for XBLIG developers, giving some insight into the training that naturally occurs for players as they start playing the game; good things for XBLIG developers to think about. http://www.significant-bits.com/super-mario-bros-3-level-design-lessons

    Read the article

< Previous Page | 482 483 484 485 486 487 488 489 490 491 492 493  | Next Page >