Search Results

Search found 5857 results on 235 pages for 'david michael rice'.


  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones. The problem is, no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations, or someone who could have been an excellent team member will get passed over for missing a piece of trivia or a tricky brain teaser.

    In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc.) there often exists a much tighter bond between HR and the hiring development staff, because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth. It seems that if you ask two different people what makes a good developer, you will often get three different opinions.

    With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process. Their view can often be that development is simply a skill that one learns, and that once acquired, a developer can produce widgets as well as the next, like workers on an assembly-line floor. On the other side, you have many developers who feel that software development is an art unto itself, and that the ability to create the purest design, know the most obscure keywords, or write the shortest possible obfuscated piece of code is what makes a good coder. So is it a skill? An art? Or something entirely in between?

    Saying that software is merely a skill, and that one just needs to learn the syntax and tools, would be akin to saying that anyone who knows English and can use Word can write a 300-page book that is accurate, meaningful, and stays true to the point. This just isn't so. It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document. I've interviewed candidates who could answer obscure syntax and keyword questions and, once they were hired, could not code effectively at all. So development must be more than a skill.

    But on the other end, we have art. Is development an art? Is our end result to produce art? I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code must perform some stated function with accuracy, efficiency, and maintainability. None of these three things have anything to do with art, per se. Art is beauty for its own sake and is a wonderful thing. But if you apply that same thought to development, it just doesn't hold. I've had developers tell me that all that matters is the end result and that how you code it is entirely part of the art, and I couldn't disagree more. Yes, the end result, the accuracy, is the prime criterion to be met. But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon.
    Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution. So development must be something less than art.

    In the end, I feel that development is a matter of craftsmanship. We use our tools and we use our skills and set about constructing something that satisfies a purpose and yet is also elegant and efficient. There is skill involved, and there is an art, but really it boils down to being able to craft code. Crafting code is far more than writing code. Anyone can write code if they know the syntax, but few people can actually craft code that solves a purpose, and craft it well. So this is what I want to find: code craftsmen! But how?

    I used to ask coding-trivia questions a long time ago, and many people still fall back on this. The thought is that if you ask the candidate some piece of coding trivia and they know the answer, it must follow that they can craft good code. For example: what C++ keyword can be applied to a class/struct field to allow it to be changed even from a const instance of that class/struct? (Answer: mutable.)

    So what do we prove if a candidate can answer this? Only that they know what mutable means. One would hope that this would imply that they'd know how to use it and, more importantly, when and if it should ever be used! But it rarely does! The problem with trivia questions is that you will either:

    - Approve a really good developer who knows what some obscure keyword is (good)
    - Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad)
    - Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad)

    Many HR departments love these kinds of tests because they are short and easy to defend if a legal issue arises over hiring decisions. After all, it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test. But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds. There are times I've hired candidates who knew every trivia question I could throw at them and couldn't craft. And then there are times I've interviewed candidates who failed all my trivia, but whom I took a chance on, who were my best finds ever.

    So if not trivia, then what? Brain teasers? The thought is that these types of questions measure the thinking power of a candidate. The problem is, once again, you will either:

    - Approve a good candidate who has never heard the problem and can solve it (good)
    - Reject a good candidate who just happens not to see the "catch" because they're nervous, or because it is really obscure (bad)
    - Approve a candidate who has studied enough interview brain teasers (once again, you can google them) to recognize the "catch" or who knows the answer already (bad)

    Once again, you're eliminating good candidates and possibly accepting bad candidates. In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not their ability to craft code.

    So how do we measure someone's ability to craft code? Here's a novel idea: have them code! Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil, and have them construct a piece of code. It just makes sense that if we're going to hire someone to code, we should actually watch them code.
    When they're done, we can judge them on several criteria:

    - Correctness - does the candidate's solution accurately solve the problem proposed?
    - Accuracy - is the candidate's solution reasonably syntactically correct?
    - Efficiency - did the candidate write or use the most efficient data structures or algorithms for the job?
    - Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability?
    - Persona - are they eager and willing, or aloof and egotistical? Will they work well within your team?

    It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code. A small illustration of what such an exercise might look like follows.
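    By way of illustration only (this example is not from the original post; the problem and names are hypothetical), here is the kind of small, self-contained exercise that touches all five criteria at once, with a crafted C# solution:

        using System;
        using System.Collections.Generic;

        static class InterviewExercise
        {
            // Hypothetical whiteboard problem: return the first character that
            // appears exactly once in the input, or null if there is none.
            public static char? FirstNonRepeated(string text)
            {
                if (text == null)
                    throw new ArgumentNullException(nameof(text)); // correctness: reject bad input

                var counts = new Dictionary<char, int>();
                foreach (var c in text)                 // first pass: O(n) counting (efficiency)
                {
                    counts.TryGetValue(c, out var seen);
                    counts[c] = seen + 1;
                }

                foreach (var c in text)                 // second pass: preserves original order
                {
                    if (counts[c] == 1)
                        return c;                       // maintainability: no clever tricks
                }
                return null;
            }
        }

    Watching a candidate work through even a toy problem like this tells you far more about how they name things, handle edge cases, and reason about cost than any trivia question can.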

    Read the article

  • Analytics: Test events not showing up - how to troubleshoot?

    - by David Parks
    I've got 3 profiles: Master, Raw Data, and Test; on the Test profile I have no filters configured. I want to test using some events. I created a local HTML file, as shown below, to generate some test data that I could play with in Analytics. But the events never showed up in Analytics. I wonder what I might be doing wrong? Is the lack of a domain an issue, maybe?

        <html>
        <head></head>
        <body>Login_popup_complete_Facebook
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-28554309-1']);
          _gaq.push(['_trackPageview']);
          _gaq.push(['_trackEvent', 'Login popup completed', 'Facebook']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
        </body>
        </html>

    Read the article

  • No contact list in MSN

    - by David
    Since today I can't see my contact list in Empathy IM using the MSN protocol. I've tried uninstalling, reinstalling, and erasing all config files from my computer (using Ubuntu Tweak and erasing the config files from my /home folder), but nothing solves the problem.

    A while ago people had the same problem; they solved it by changing a line in a script, but that bug was fixed in the latest versions of Empathy. I've tried changing that script anyway, using other lines. In

        /usr/lib/pymodules/python2.6/papyon/service/description/SingleSignOn/RequestMultipleSecurityTokens.py

    I've changed the line

        CONTACTS = ("contacts.msn.com", "MBI")

    to the older one:

        CONTACTS = ("contacts.msn.com","?fs=1&id=24000&kv=7&rn=93S9SWWw&tw=0&ver=2.1.6000.1")

    But this did not fix the bug. In the advanced options (in the Empathy account options) I have this:

        Server: messenger.hotmail.com
        Port: 1863

    How can I solve this? Please help.

    Read the article

  • Ubuntu 11.10 with KDE installed does not prompt for elevation for privileged ops in all apps

    - by Michael Goldshteyn
    I installed the KDE desktop environment on top of Ubuntu 11.10, and while I am using KDE I do not get an elevation dialog when I try to perform tasks that require root privileges. Instead, the operations silently fail, unless I launch apps from a terminal, in which case I get errors like:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/softwareproperties/gtk/SoftwarePropertiesGtk.py", line 649, in on_isv_source_toggled
            self.backend.ToggleSourceUse(str(source_entry))
          File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 143, in __call__
            **keywords)
          File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 630, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: com.ubuntu.SoftwareProperties.PermissionDeniedByPolicy: com.ubuntu.softwareproperties.applychanges

    Or, from the Muon package manager, a similar error dialog. Does anyone know what I need to do to fix this, so that I get a proper dialog asking for elevation? Otherwise, I have to start each app that may need root privileges from a terminal with sudo or gksudo. Thanks

    Read the article

  • Visual Studio Little Wonders: Quick Launch / Quick Access

    - by James Michael Hare
    Once again, in this series of posts I look at features of Visual Studio that may seem trivial, but can help improve your efficiency as a developer. The index of all my past Little Wonders posts can be found here.

    Well, my friends, this post will be a bit short because I'm in the middle of a bit of a move at the moment. But, that said, I didn't want to let the blog go completely silent this week, so I decided to add another Little Wonder to the list for the Visual Studio IDE.

    How often have you wanted to change an option or execute a command in Visual Studio, but can't remember where the darn thing is in the menus, settings, etc.? If so, Quick Launch in VS2012 (or Quick Access in VS2010 with the Productivity Power Tools extension) is just for you!

    Quick Launch / Quick Access – find a command or option quickly

    For those of you using Visual Studio 2012, Quick Launch is built right into the IDE at the top of the title bar, near the minimize, maximize, and close buttons. But do not despair if you are using Visual Studio 2010: you can get Quick Access from the Productivity Power Tools extension. To do this, go to the extension manager, then go to the gallery, search for Productivity Power Tools, and install it. If you don't have VS2012 yet, the Productivity Power Tools is the next best thing. This extension updates VS2010 with features such as Quick Access, the Solution Navigator, a searchable Add Reference dialog, better tab wells, etc. I highly recommend it!

    But back to the topic at hand! In VS2012, Quick Launch is built into the IDE and can be accessed by clicking in the Quick Launch area of the title bar, or by pressing CTRL+Q. If you have VS2010 with the PPT installed, though, it is called Quick Access and is accessible through View –> Quick Access.

    Regardless of which IDE you are using, the feature behaves mostly the same. It allows you to search all of Visual Studio's commands and options for a particular topic. For example, let's say you want to change from tab characters to tabs expanded to spaces, but don't remember where that option is buried. You can bring up Quick Launch / Quick Access and type in "tabs", and it brings up a list of all options on tabs; you can then choose the one appropriate to you, click on it, and it will take you right there! A lot easier than diving through the options tree to find what you are looking for!

    It also works on menu commands. For example, if you can't remember how to open the Output window, it shows you the menu items that will get you to the Output window and (if applicable) the keyboard shortcuts. Again, clicking on one of these will perform the action for you as well.

    There are also some tasks you can perform directly from Quick Launch / Quick Access. For example, perhaps you are one of those people who like to have line numbers in your editor (I do), so let's bring up Quick Launch / Quick Access, type "line numbers", and select Turn Line Numbers On. Voila! We have line numbers in VS2010. You can do this in VS2012 too, but it takes you to the option settings instead of directly turning them on and off. There are bound to be differences between the way the two editors organize settings and commands, but you get the point.

    So, as you can see, the Quick Launch / Quick Access feature in Visual Studio makes it easy to jump right to the options, commands, or tasks you are interested in without all the digging.
    Summary

    An IDE as powerful as Visual Studio has so many options and commands that it can be confusing to remember how to find and invoke them. Quick Launch (Quick Access in VS2010 with the Productivity Power Tools extension) is a quick and handy way to jump to any of these options, commands, or tasks without having to remember in which menu or screen they are buried!

    Technorati Tags: C#,CSharp,.NET,Little Wonders,Visual Studio,Quick Access,Quick Launch

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year's MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy, Product Hub Applications, at Oracle will present the keynote: Product Master Data Management for Today's Enterprise. Here's the abstract:

    Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is various domain silos and fragmented product data. This adversely affects business performance including, but not limited to, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors, and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally, you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. You will also understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive, and successful business.

    Cisco Systems will join Sachin and cover their experiences, lessons learned, and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes, and analytical systems, please join us at the Hyatt, Fisherman's Wharf this Thursday, June 30th.

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development, both its benefits and limitations.

    To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context:

    - Test - An item or action which measures something in some quantifiable form.
    - Driven - The primary motivation or focus of a series of activities (process).
    - Development - All phases of a software project/product, from concept through delivery.

    The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product."

    There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true: certain people DID understand the material but were not very good at taking this type of test).

    Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language, and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples.

    It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid?) that drives the ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort.

    To see TDD in a bigger view, let's identify the various activities that are part of the Software Development Lifecycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones here are:

    - Requirements Gathering
    - Architecture
    - Design
    - Implementation
    - Quality Assurance

    Can each of these items be subjected to a process which establishes quantified metrics that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a class/method) of the implementation matches the (test writer's perspective of) the appropriate design document.

    So what can we do? For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and to providing clear information on what items need to be addressed, along with the appropriate time to address them, in varying levels of detail. Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
    Returning for a moment to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple-choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number.

    The "Wrong Process"/"Right Answer" combination is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation. Is it correct (for integral points)?

        int Solve(int m, int b, int x) { return m * x + b; }

    Most people would respond "Yes". But let's take the question one step further: is it correct for all possible values of m, b, x? (No fair if you cheated by being focused on the bolded text!) Without additional information regarding constraints on "the possible values of m, b, x", the answer must be NO; there is the risk of overflow/wraparound that will produce an incorrect result!

    To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "correct" and has "quality". Yet how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects.

    This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply:

    - Each item should be tested.
    - The test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item.

    We also add concepts that expand the scope and alter the style by recognizing:

    - There are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way).
    - Correctness and quality can not be solely measured by "correct results".

    In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas. An illustration of making the hidden bounds explicit appears below.
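    As a sketch (not part of the original article; the checked keyword and the boundary test are my illustrative choices), here is one way the overflow concern could be surfaced in C#:

        using System;

        static class Line
        {
            // Same calculation, but the bounds question is now explicit:
            // instead of silently wrapping around, out-of-range inputs fail fast.
            public static int Solve(int m, int b, int x)
            {
                checked   // raises OverflowException on wraparound
                {
                    return m * x + b;
                }
            }
        }

        // A boundary test (xUnit-style, hypothetical) then documents the requirement:
        //   Assert.Throws<OverflowException>(() => Line.Solve(int.MaxValue, 1, 2));
        // Only once the stakeholders define the legal ranges of m, b, and x can we
        // decide whether throwing here is "correct" behavior.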

    Read the article

  • TFS work items tips

    - by Michael Freidgeim
    We started to use TFS to track requests using work items.

    1. Telerik's TFS Work Item Manager (a beta version for TFS 2010 is available) could be interesting to use instead of the standard VS2010 tooling, especially for someone who doesn't want to have VS installed on their machine (thanks to dimarzionist, who pointed to the tool). See also TFS Project Dashboard.

    2. Visual Studio TFS work item attachments tab: I've found that Outlook emails can be dropped into TFS work item attachments. Just open the TFS work item attachment tab and drag and drop an Outlook email onto it. You can also copy any selected text and paste it into the TFS work item attachments tab. The text will be saved as an attachment file.

    Read the article

  • Test interface implementation

    - by Michael
    I have an interface in our code base that I would like to be able to mock out for unit testing. I am writing a test implementation to allow the individual tests to override the specific methods they are concerned with, rather than implementing every method. I've run into a quandary over how the test implementation should behave if the test fails to override a method used by the method under test. Should I return a "non-value" (0, null) in the test implementation, or throw an UnsupportedOperationException to explicitly fail the test? A sketch of the second option follows.
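    For illustration only (this sketch is not from the original question, and the interface is hypothetical), here is the throwing variant in C#, where NotSupportedException plays the role of Java's UnsupportedOperationException. The advantage is that an unexpected dependency fails loudly instead of letting a silent 0/null propagate into a misleading test result:

        using System;

        public class Widget { }

        public interface IWidgetService
        {
            Widget Load(int id);
            void Save(Widget widget);
        }

        // Base stub: every method fails loudly until a test overrides it.
        public class StubWidgetService : IWidgetService
        {
            public virtual Widget Load(int id)
                => throw new NotSupportedException("Load was not stubbed by this test");

            public virtual void Save(Widget widget)
                => throw new NotSupportedException("Save was not stubbed by this test");
        }

        // An individual test overrides only what it needs:
        public class LoadOnlyStub : StubWidgetService
        {
            public override Widget Load(int id) => new Widget();
        }

    With this shape, a test that accidentally exercises Save gets an immediate, descriptive failure pointing at the missing override, rather than a puzzling assertion failure downstream.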

    Read the article

  • Where can I find the following info on Python (coming from Ruby)?

    - by Michael Durrant
    I'm coming from Ruby and Ruby on Rails to Python. Where can I find resources about:

    - The command prompt: what is Python's version of irb?
    - Django: what is a good resource for installing it, using it, etc.?
    - Python screencasts: is there anything like RailsCasts, i.e. good video tutorials?
    - Web sites with the API info about which versions have what, and which to use
    - Info and recommendations on editors, plugins, and IDEs
    - Common gotchas for newbies and good things to know at the outset
    - Scaling issues and their common causes
    - What is the equivalent of gems, i.e. components I can plug in?
    - What are popular plugins for Django authentication and forms, similar to Devise and simple_form?
    - Testing: what's available? Anything similar to RSpec?
    - Database adapters: any preferences?
    - Framework info: is Django MVC like Rails?
    - OO'yness: is everything an object that gets sent messages, or is it a different paradigm?
    - Syntax: anything like JSLint for checking for well-formed code?

    Read the article

  • Analyst Firm Gives Oracle Highest Rating for Local Government CRM

    - by michael.seback
    Gartner, Inc. has given Oracle a rating of "Strong Positive," the highest possible ranking, in its report "MarketScope for Local Government CRM Products." The report compares the offerings of nine providers of CRM commercial off-the-shelf software for local government agencies. Gartner notes that a provider receiving a Strong Positive ranking must be a "provider of strategic products, services or solutions..." and recommends that "customers continue with planned investments and potential customers consider this vendor a strong choice for strategic investments." "Local governments today face tough challenges as they are tasked with reducing costs while at the same time providing citizens with services and information more quickly and efficiently than ever before. Oracle is pleased to be recognized by Gartner with a Strong Positive rating in its 'MarketScope for Local Government CRM Products' report, as we believe it reflects our commitment to helping our public sector customers meet these challenges today and in the future," said Mark Johnson, senior vice president, Oracle Public Sector. Read the highlights.

    Read the article

  • ArchBeat Link-o-Rama for 11/14/2011

    - by Bob Rhubart
    InfoQ: Developer-Driven Threat Modeling - Threat modeling is critical for assessing and mitigating the security risks in software systems. In this IEEE article, author Danny Dhillon discusses a developer-driven threat modeling approach to identify threats using dataflow diagrams.

    Managing the Virtual World | Philip J. Gill - "The killer app for virtualization has been server consolidation," says Al Gillen, program vice president for systems software at market research firm International Data Corporation (IDC).

    Solaris X86 AESNI OpenSSL Engine | Dan Anderson - "Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor," says Anderson.

    WebLogic Access Management | René van Wijk - "This post is a continuation of the post WebLogic Identity Management. In this post we will present the steps involved to integrate WebLogic and Oracle Access Manager," says Oracle ACE René van Wijk.

    OTN Developer Days in the Nordics - Helsinki, Oslo, Stockholm, and Copenhagen - OTN Developer Days head for the land of the midnight sun.

    Podcast: Information Integration Part 2/3 - In part two of a three-part program, Oracle Information Integration, Migration, and Consolidation authors Jason Williamson, Tom Laszewski, and Marc Hebert offer examples of some of the most daunting information integration challenges.

    Measuring the Human Task activity in Oracle BPM | Leon Smiers - Leon Smiers discusses using Oracle BPM to get answers to important questions about what's happening with a business process.

    Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14 - Spend the day with your peers learning from experts in Cloud computing, engineered systems, and Oracle Fusion Middleware.

    The Heroes of Java: Michael Hüttermann | Markus Eisele - Oracle ACE Director Markus Eisele interviews Java Champion Michael Hüttermann on his role, his process, and on why he uses Java.

    Read the article

  • ODI 12c - Getting up and running fast

    - by David Allan
    Here's a quick A-B-C to show you how to quickly get up and running with ODI 12c, from getting the software to creating a repository via wizard or the command line, then installing an agent for running load plans and the like.

    A. Get the software from OTN and install Studio. Check out this viewlet here for quickly doing this.

    B. Create a repository using the RCU; check out this viewlet here, which uses the FMW Repository Creation Utility. You can also silently create (and drop) a repository using the command line; this is really easy:

        .\rcu -silent -createRepository -connectString yourhost:1521:orcl.st-users.us.oracle.com -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -schemaPrefix X -component ODI -component IAU -component IAU_APPEND -component IAU_VIEWER -component OPSS < passwords.txt

    where the passwords file contains info such as:

        sysdba_passwd
        newschema_passwd
        odi_user_passwd
        D
        workreposname
        workrepos_passwd

    You can find details about the silent use of RCU here in the FMW documentation.

    C. Quickly create an agent for executing load plans and the like; there is a great OBE for this, check it out here. If you are on your laptop and just want as minimal an agent as possible, then this link is a must.

    With these three steps you are ready to get to the fun stuff! Check out more OBEs here - keep on the lookout for more!
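    As an additional, unverified example (not in the original post; double-check the flags against the RCU documentation for your release), the silent drop mirrors the create:

        .\rcu -silent -dropRepository -connectString yourhost:1521:orcl.st-users.us.oracle.com -dbUser sys -dbRole sysdba -schemaPrefix X -component ODI < passwords.txt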

    Read the article

  • Website performance tips?

    - by Michael Schinis
    I'm having some trouble with the loading of my website. Here's a link to the website: link. It sometimes loads fast; then, when you refresh it, most of the time it will just keep trying to load images, keep doing that for a minute or so, and none of the JavaScript will execute. I have followed most of the tips given by Yahoo, except caching, which I couldn't get working properly. Does anyone know how to do proper caching of image and JavaScript files using .htaccess? Most of the code I found online won't work. Any advice whatsoever is extremely helpful. Thanks
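    As a hedged sketch (not part of the original question): the kind of .htaccess caching the Yahoo guidelines point at is typically configured with Apache's mod_expires, along these lines; whether the module is enabled depends on the host:

        <IfModule mod_expires.c>
          ExpiresActive On
          # Cache static assets; HTML stays revalidated on each visit.
          ExpiresByType image/png "access plus 1 month"
          ExpiresByType image/jpeg "access plus 1 month"
          ExpiresByType image/gif "access plus 1 month"
          ExpiresByType text/css "access plus 1 week"
          ExpiresByType application/javascript "access plus 1 week"
        </IfModule>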

    Read the article

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture domain?

    There are several definitions of Application Architecture. TOGAF says "The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business". FEA says the Application Architecture "Defines the applications needed to manage the data and support the business functions".

    I agree with these definitions. They reflect what the Application Architecture domain does. However, they need to be decomposed to be practical. I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with/in the Application Architecture domain. These viewpoints are, at a high level:

    Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications are directly supporting the business.

    Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically application rationalization), the phrase "complexity equals cost" drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are:

    - Software: The application itself, as well as the software the application relies on to function (web servers, application servers).
    - Infrastructure: The underlying hardware and network components required by the application and supporting application software.
    - Development: How the application is created and maintained. This encompasses development components that are part of the application itself (i.e. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view.
    - Integration: The interfaces that the application provides for integration, as well as the integrations to other applications and data sources the application requires to function.
    - Type: Reflects the kind of application (mash-up, 3-tiered, etc.). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view.)

    Organization View: Organizations are comprised of people, and those people use applications to do their jobs. Trying to define the application architecture domain without taking the organization that will use/fund/change it into consideration is like trying to design a car without thinking about who will drive it (i.e. you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes:

    - Ownership: Who 'owns' the application? This will usually reflect primary funding and utilization, but not always.
    - Funding: Who funds both the acquisition/creation as well as the ongoing maintenance (funding to create/change/operate)?
    - Change: Who can/does request changes to the application, and what process do they follow?
    - Utilization: Who uses the application, how often do they use it, and how do they use it?
    - Support: Which organization is responsible for the ongoing support of the application?

    Information View: Whether or not you subscribe to the view that "information drives the enterprise", it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or, at a higher level, how the Application Architecture domain relates to the Information Architecture domain). It includes:

    - Access: The application is the mechanism by which end users access information. This could be through a primary application (i.e. a CRM application), or through an information-access type of application (a BI application, as an example).
    - Creation: Applications create data in order to provide information to end users (i.e. an application creates an order to be used by an end user as part of the fulfillment process).
    - Consumption: Describes the data required by applications to function (i.e. a product id is required by a purchasing application to create an order).

    Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the application architecture domain is through the use of 'services' (think SOA, web services, etc.). Whether it is through building applications from the ground up utilizing services, service-enabling an existing application, or buying applications that are already 'service enabled', compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The application service view consists of:

    - Services: Here, I refer to the generic definition of a service: "a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage".
    - Functions: The activities within an application that are not available/applicable for re-use. This view is helpful when identifying duplicate functions between applications that are not service enabled.

    Delivery Model View: It is hard to talk about EA today without hearing the terms 'cloud' or shared services. Organizations are looking at the ways their applications are delivered for several reasons: to reduce cost (both CAPEX and OPEX), to improve agility (time to market, as an example), etc. From an EA perspective, where and how an application is deployed has impacts on the overall enterprise architecture. From integration concerns to SLA requirements to security and compliance issues, the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end users, and consists of different types of delivery mechanisms/deployment options for applications:

    - Traditional: Reflects non-cloud delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer.
    - Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers.
    - Public Cloud: The application runs on infrastructure provisioned for open use by the general public.
    - Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

    While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.

    Finally, the application architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others either directly or indirectly at some point. Oracle links them at a high level as follows: Business Capabilities and/or Business Processes (Business Architecture) link to the Applications that enable the capability/process (Application Architecture - COTS, custom), which link to the Information Assets managed/maintained by the Applications (Information Architecture), which link to the technology infrastructure upon which all this runs (Technology Architecture - integration, security, BI/DW, DB infrastructure, deployment model). There are, however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me to do just that.

    Read the article

  • Does Open Source lead to bad coding?

    - by David Conde
    I have a thought that I tried asking at SO, but it didn't seem like the appropriate place. I think that source-hosting sites like Google Code, GitHub, and SourceForge have played a major role in the history of programming. However, I found that there is a bad side to these kinds of sites, which is that you may just "copy" code from almost anyone, not knowing whether it is good (tested) source or not. This line of thought has led me to believe that source code websites tend to lead many developers (most likely inexperienced ones) to copy/paste massive amounts of code, which I find just wrong. I really don't know how to focus the question well, but the basic thought would be: Is this OK? Is Open Source contributing to that, or am I just seeing ghosts? I hope people get interested, because I think this is an important topic.

    Read the article

  • Windows Phone 7 DatePicker gotcha

    - by David Turner
    The Silverlight Toolkit for Windows Phone adds some great extra controls for Windows Phone 7. One gotcha that I ran into was that the DatePicker Application Bar icons don't show up unless you include them in your project, so your DatePicker ends up with blank Application Bar buttons. Tim Heuer mentions this in his blog post about the Silverlight Toolkit for WP7, and, as he says, it is documented in the source code. The problem is that the icons can't be referenced from the Silverlight Toolkit assembly, and the solution is that you have to add them to your project in the 'well known' / pre-defined location: a top-level folder in your project called Toolkit.Content. You must also make sure to mark the icons with a Build Action of 'Content', otherwise it won't work. With that in place, your DatePicker renders its Application Bar icons correctly.
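    As a sketch of the resulting layout (the folder name comes from the post; the icon file names are from memory of the toolkit source and should be verified against your toolkit version):

        MyWp7App/
            Toolkit.Content/
                ApplicationBar.Cancel.png    (Build Action: Content)
                ApplicationBar.Check.png     (Build Action: Content)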

    Read the article

  • Going Inside the Store

    - by David Dorf
    Location was the first "killer tech" for smartphones, and innovators have found several ways to use it. For retail, apps exist to find nearby stores, provide coupons, and give directions to the front door. But once you enter the store, location-finding ceases to work. That's because your location is usually found by finding GPS satellites in the sky, and the store's roof blocks the signal. But it won't take technology long to solve that problem.

    The first problem to solve is a lack of indoor maps. Navteq and others provide very accurate maps of the outdoors, enabling navigation for cars and pedestrians. Micello is building a business creating digital maps of indoor locations like malls, convention centers, and office buildings. They have over 500 live maps, including maps of IKEA stores. They claim it took them only four hours to create a map of the Stanford Shopping Center in Palo Alto, with its 1.4 million square feet and 140 retail stores. And within stores, retailers are producing more accurate plan-o-grams. I'm always impressed watching demos of our space planning from AVT. It uses CAD software to allow you to walk the virtual store and see products on the shelves.

    The second problem is being able to determine location inside the store so it can be overlaid on the map. There are several goals for this endeavor. Your smartphone might direct you straight to particular products, it might summon a sales associate to your location for immediate assistance, and it might send you coupons based on the aisle you're viewing. Companies like Nearbuy, ZuluTime, and Skyhook are working to master indoor location using a combination of GPS signals, WiFi, and cell tower positioning to calculate a location. (Skyhook calls this WPS, as depicted in the chart.) Today they can usually hit 10 meters of accuracy, but that number is improving all the time. When it gets inside 3 meters, some of the goals mentioned earlier will be within easy reach. I for one can't wait until the time my iPhone leads me directly to the sprinkler heads in Lowes and Home Depot.

    Read the article

  • Adjust resolution in an xfce4 VirtualBox guest

    - by David
    I have VirtualBox 4.1.2_ubuntur3859 installed on an Ubuntu 11.10 host, running a guest Ubuntu Server 10.04 with xfce4 and xorg installed with --no-install-recommends. I have installed the guest additions, but the maximum resolution in the display settings is 800x600. I have read related questions:

    - How to change resolution of the VirtualBox (Ubuntu guest and host)?
    - Higher screen resolution in VirtualBox?
    - upgrading VirtualBox 3.2.10 breaks my guest Ubuntu screen resolution
    - Ubuntu as guest OS (with Vista host) stuck at 800x600 resolution

    but none contain the solution to my issue. Am I missing any particular packages that would allow me to change resolution? I would like to keep the machine as small as possible.

    Read the article

  • Tomboy error while tring to sync with Ubuntu one; Can anyone help?

    - by Michael Chapman
    So I'm sure you've heard the song before, but after trying to sync my notes with Ubuntu One (on 10.10 AMD64) I get "Could not synchronize notes. Check the details below and try again." Of course, the problem is that there are no details, and trying again doesn't help. So I ran tomboy -debug and compared my error to anything I could find about similar problems (such as the post here), but found nothing useful.

    Anyway, here's my first error, which I got using Preferences -> Synchronization -> Ubuntu One:

        [ERROR 21:08:42.271] Synchronization failed with the following exception: String was not recognized as a valid DateTime.
          at System.DateTime.Parse (System.String s, IFormatProvider provider, DateTimeStyles styles) [0x00000] in <filename unknown>:0
          at System.DateTime.Parse (System.String s, IFormatProvider provider) [0x00000] in <filename unknown>:0
          at System.DateTime.Parse (System.String s) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.NoteInfo.ParseJson (Hyena.Json.JsonObject jsonObj) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.UserInfo.ParseJsonNoteArray (Hyena.Json.JsonArray jsonArray) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.UserInfo.ParseJsonNotes (System.String jsonString, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.UserInfo.GetNotes (Boolean includeContent, Int32 sinceRevision, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.WebSyncServer.GetNoteUpdatesSince (Int32 revision) [0x00000] in <filename unknown>:0
          at Tomboy.Sync.SyncManager.SynchronizationThread () [0x00000] in <filename unknown>:0

    The next thing I tried was using Preferences -> Synchronization -> Tomboy Web with the default 'http://one.ubuntu.com/notes/' and got the same error, plus one more:

        [ERROR 21:12:31.949] System.ObjectDisposedException: The object was used after being disposed.
          at System.Net.HttpListener.CheckDisposed () [0x00000] in <filename unknown>:0
          at System.Net.HttpListener.EndGetContext (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.WebSyncPreferencesWidget.<OnAuthButtonClicked>m__1 (IAsyncResult localResult) [0x00000] in <filename unknown>:0
        [ERROR 21:13:19.245] Synchronization failed with the following exception: String was not recognized as a valid DateTime.
          at System.DateTime.Parse (System.String s, IFormatProvider provider, DateTimeStyles styles) [0x00000] in <filename unknown>:0
          at System.DateTime.Parse (System.String s, IFormatProvider provider) [0x00000] in <filename unknown>:0
          at System.DateTime.Parse (System.String s) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.NoteInfo.ParseJson (Hyena.Json.JsonObject jsonObj) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.UserInfo.ParseJsonNoteArray (Hyena.Json.JsonArray jsonArray) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.UserInfo.ParseJsonNotes (System.String jsonString, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.Api.UserInfo.GetNotes (Boolean includeContent, Int32 sinceRevision, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
          at Tomboy.WebSync.WebSyncServer.GetNoteUpdatesSince (Int32 revision) [0x00000] in <filename unknown>:0
          at Tomboy.Sync.SyncManager.SynchronizationThread () [0x00000] in <filename unknown>:0

    I have also tried removing and then re-adding my computer from my Ubuntu One account, but that did not help either. The only other thing I have noticed is that under System -> Preferences -> Ubuntu One -> Services, "Notes" is not listed as a service. I don't know if this is normal or not. Thanks for any help, and please let me know if anything is confusing.

    Read the article

  • Testing on Devices Other Than the Known Brands (Local and Imported Phones)

    - by David Dimalanta
    I have a question. When testing a device using Eclipse, it's easy to install and add the device software for the specific brands commonly used in game testing, like Samsung, Google, T-Mobile, and HTC, according to the Android Developers website. But if I'm using other Android brands to test the program via Eclipse (i.e. MyPhone, Starmobile), what should I look for to download in order to enable testing on phones from brands other than the commonly used ones: the model number, or simply the brand? Here are some examples of these lesser-known Android brands:

    - Starmobile Engage 7 (http://www.lazada.com.ph/Starmobile-Engage-7-Android-40-4GB-with-Wi-Fi-Black-Starmobile-Mercury-B201-COMBO-39833.html/)
    - My|Phone A898 Duo (http://www.myphone.com.ph/#!a898-duo/c1yt)

    Also, take note that I'm a Filipino programmer working in the Philippines to test our local smartphones with the created Android game or app. I hope you can understand me; thanks for your help.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized.

    The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business; then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep, and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials, and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM), and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    [Figure: GSS Requirements Gathering]

    GSS defines all the key business entities and attributes, including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    EBusiness is about maximizing operational efficiency.
    At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    [Figure: Enterprise Business Processes]

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and services are engineered, produced, and sold. Customers are serviced. And across this entire spectrum, employees do the procuring, supporting, engineering, producing, selling, and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply.

    But imagine having all the data under these operational business processes in one place:

    - The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products, and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled, so data manipulation can be controlled along organizational boundaries.
    - It uses variable-byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localization.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. It identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.
    [Figure: Interconnected Business Entities]

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either a single entity they can master or separate, disconnected models for various business entities. Higher-level integrations are available, but they are a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Loading sound in XNA without the Content Pipeline

    - by David Gouveia
    I'm working on a "Game Maker"-type of application for Windows where the user imports his own assets to be used in the game. I need to be able to load this content at runtime on the engine side. However I don't want the user to have to install anything more than the XNA runtime, so calling the content pipeline at runtime is out. For images I'm doing fine using Texture2D.FromStream. I've also noticed that XNA 4.0 added a FromStream method to the SoundEffect class but it only accepts PCM wave files. I'd like to support more than wave files though, at least MP3. Any recommendations? Perhaps some C# library that would do the decoding to PCM wave format.
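
    For reference, a minimal sketch of the runtime-loading approach I have so far (file paths and the GraphicsDevice reference are just placeholders; the sound side only works for uncompressed PCM .wav files today):

        using System.IO;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Graphics;

        static class RuntimeAssets
        {
            // Loads a user-supplied image (e.g. PNG) at runtime, bypassing the content pipeline.
            public static Texture2D LoadTexture(GraphicsDevice device, string path)
            {
                using (var stream = File.OpenRead(path))
                    return Texture2D.FromStream(device, stream);
            }

            // Loads a sound at runtime. SoundEffect.FromStream only accepts
            // uncompressed PCM wave data, so an MP3 would first need to be
            // decoded to PCM by some third-party library.
            public static SoundEffect LoadPcmWave(string path)
            {
                using (var stream = File.OpenRead(path))
                    return SoundEffect.FromStream(stream);
            }
        }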

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspx

    This week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites to handle. One of the biggest is that although the support team all have MSDN accounts, the local desktops they work on are not ideal for running most of the labs, as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area:

    - MSDN software can now be used on an Azure VM
    - You don't pay for Azure VMs when they are no longer used

    Since the support team only have limited experience of Windows Azure and the organisation also has an Enterprise Agreement, we decided it would be best value for money to spin up a training lab in a subscription on the EA; we can then turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed.

    In order to achieve this I wanted to create a PowerShell script which would set up my training lab. The aim was to create 18 VMs based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below.

    The Start & Variables

    The text below sets up the PowerShell environment and some variables which I will use elsewhere in the script. It also imports the Azure PowerShell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure and PowerShell, so you already know how to do this.

        Set-ExecutionPolicy Unrestricted
        cls

        $startTime = get-date
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"

        # Azure Publisher Settings
        $azurePublisherSettings = '<Your settings file>.publishsettings'

        # Subscription Details
        $subscriptionName = "<Your subscription name>"
        $defaultStorageAccount = "<Your default storage account>"

        # Affinity Group Details
        $affinityGroup = '<Your affinity group>'
        $dataCenter = 'West Europe' # From Get-AzureLocation

        # VM Details
        $baseVMName = 'TRN'
        $adminUserName = '<Your admin username>'
        $password = '<Your admin password>'
        $size = 'Medium'
        $vmTemplate = '<The name of your VM template image>'
        $rdpFilePath = '<File path to save RDP files to>'
        $machineSettingsPath = '<File path to save machine info to>'

    Functions

    In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM.
    This will do the following actions:

    - If the VM already exists it will be deleted
    - Create the cloud service
    - Create the VM from the template I have created
    - Add an endpoint so we can RDP to them all over the same port
    - Download the RDP file so there is a shortcut the trainees can use to easily access the machine
    - Write settings for the machine to a log file

        function CreateVM($machineNo)
        {
            # Specify a name for the new VM
            $machineName = "$baseVMName-$machineNo"
            Write-Host "Creating VM: $machineName"

            # Get the Azure VM Image
            $myImage = Get-AzureVMImage $vmTemplate

            # Each VM gets its own cloud service; the name must be assigned
            # before the existence check below can use it
            $serviceName = "bupa-azure-train-$machineName"

            # If the VM already exists delete and re-create it
            $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName
            if ($existingVm -ne $null)
            {
                Write-Host "VM already exists so deleting it"
                Remove-AzureVM -Name $machineName -ServiceName $serviceName
            }

            "Creating Service"
            Remove-AzureService -Force -ServiceName $serviceName
            New-AzureService -Location $dataCenter -ServiceName $serviceName

            Write-Host "Creating VM: $machineName"
            New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password

            Write-Host "Updating the RDP endpoint for $machineName"
            Get-AzureVM -name $machineName -ServiceName $serviceName `
                | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 `
                | Update-AzureVM

            Write-Host "Get the RDP File for machine $machineName"
            $machineRDPFilePath = "$rdpFilePath\$machineName.rdp"
            Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"

            WriteMachineSettings "$machineName" "$serviceName"
        }

    The DeleteMachineSettings function is used to delete the log file before we start re-running the process.

        function DeleteMachineSettings()
        {
            Write-Host "Deleting the machine settings output file"
            [System.IO.File]::Delete("$machineSettingsPath");
        }

    The WriteMachineSettings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VMs to our infrastructure team so they can configure access to all of the VMs.

        function WriteMachineSettings([string]$vmName, [string]$vmServiceName)
        {
            Write-Host "Writing to the machine settings output file"

            $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName
            $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP

            $sb = new-object System.Text.StringBuilder
            $sb.Append("Service Name: ");
            $sb.Append($vm.ServiceName);
            $sb.Append(", ");
            $sb.Append("VM: ");
            $sb.Append($vm.Name);
            $sb.Append(", ");
            $sb.Append("RDP Public Port: ");
            $sb.Append($vmEndpoint.Port);
            $sb.Append(", ");
            $sb.Append("Public DNS: ");
            $sb.Append($vmEndpoint.Vip);
            $sb.AppendLine("");
            [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());
        } # end functions

    Rest of Script

    In the rest of the script it is really just the bit that orchestrates the actions we want to happen.
    It will load the publisher settings, select the Azure subscription, and then loop around the CreateVM function to create 16 VMs.

        Import-AzurePublishSettingsFile $azurePublisherSettings
        Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount
        Select-AzureSubscription -SubscriptionName $subscriptionName

        DeleteMachineSettings

        "Starting creating Bupa International Azure Training Lab"
        $numberOfVMs = 16

        for ($index = 1; $index -le $numberOfVMs; $index++)
        {
            $vmNo = "$index"
            CreateVM($vmNo);
        }

        "Finished creating Bupa International Azure Training Lab"

        # Give it a Minute
        Start-Sleep -s 60

        $endTime = get-date
        "Script run time " + ($endTime - $startTime)

    Conclusion

    As you can see there is nothing too fancy about this script, but in our case of creating a small isolated training lab which is not connected to our corporate network, we can easily use it to provision the lab. I'm sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note: there are some soft limits in Azure on the number of cores and services your subscription can use, and you may need to contact the Azure support team to increase them.

    In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the cost of this test lab to be very small, especially considering we have EA pricing. As a ball park, I think my 18-VM training lab environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of creating a single VM on premise.
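
    Since the whole point is to stop paying when the lab is idle, a natural companion to the script (offered only as a sketch reusing the naming convention above, not something from the original) is a loop that deallocates every lab VM at the end of the day:

        # Sketch: stop (deallocate) every lab VM so it stops accruing compute charges.
        # Assumes the same $baseVMName, $numberOfVMs and per-VM cloud service naming as above.
        for ($index = 1; $index -le $numberOfVMs; $index++)
        {
            $machineName = "$baseVMName-$index"
            $serviceName = "bupa-azure-train-$machineName"
            Write-Host "Stopping VM: $machineName"
            Stop-AzureVM -ServiceName $serviceName -Name $machineName -Force
        }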

    Read the article

  • How to Handle frame rates and synchronizing screen repaints

    - by David Kroukamp
    I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario. I'm creating a 2-player fighting game. An average battle will include a map (moving/still) and 2 characters (which are rendered by redrawing a varying amount of sprites one after the other). At the moment I have a single game loop limiting me to a set number of frames per second (using Java):

        Timer timer = new Timer(0, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                long beginTime; // the time when the cycle began
                long timeDiff;  // the time it took for the cycle to execute
                int sleepTime;  // ms to sleep (< 0 if we're behind)
                int fps = 1000 / 40;
                beginTime = System.nanoTime() / 1000000;
                // execute loop to update, check collisions and draw
                gameLoop();
                // calculate how long the cycle took
                timeDiff = System.nanoTime() / 1000000 - beginTime;
                // calculate sleep time
                sleepTime = fps - (int) (timeDiff);
                if (sleepTime > 0) { // if sleepTime > 0 we're OK
                    ((Timer) e.getSource()).setDelay(sleepTime);
                }
            }
        });
        timer.start();

    In gameLoop() the characters are drawn to the screen (a character holds an array of images which make up its current sprites). Every gameLoop() call will change a character's current sprite to the next one, looping when the end is reached. But as you can imagine, if an animation is only 3 images in length, then calling gameLoop() 40 times will cause the character's movement to cycle 40/3 = 13 times. This causes a few minor anomalies in the sprites for some characters. So my question is: how would I go about delivering a set amount of frames per second when I have 2 characters on screen with varying amounts of sprites?
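
    One direction I've been considering (just a sketch, with hypothetical class and field names) is to give each animation its own frame duration and advance the sprite index from elapsed time rather than once per gameLoop() call, so the loop rate and the animation rates are decoupled:

        import java.awt.Image;

        // Sketch: each character owns one of these per animation; gameLoop()
        // passes in the elapsed milliseconds since the previous tick, and the
        // render step just draws currentImage(). A 3-frame animation can then
        // play at its own rate (say 10 fps) while the loop runs at 40 fps.
        class SpriteAnimation {
            private final Image[] frames;
            private final long frameDurationMs; // how long each sprite frame stays visible
            private long elapsedMs;
            private int currentFrame;

            SpriteAnimation(Image[] frames, long frameDurationMs) {
                this.frames = frames;
                this.frameDurationMs = frameDurationMs;
            }

            void update(long deltaMs) {
                elapsedMs += deltaMs;
                while (elapsedMs >= frameDurationMs) {
                    elapsedMs -= frameDurationMs;
                    currentFrame = (currentFrame + 1) % frames.length;
                }
            }

            Image currentImage() {
                return frames[currentFrame];
            }
        }

    Would something along these lines be the right way to go, or is there a better-established pattern for this?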

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >