Search Results

Search found 93609 results on 3745 pages for 'one terrorist'.

  • Combining template method with strategy

    - by Mekswoll
    An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of these games are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that the games could differ in the following:

    - The way in which the result of a move is handled
    - The way in which the end of the game is determined
    - The way in which the winner is determined

    The first thing that came to my mind was to use the strategy pattern, since I have a variation in algorithms (the actual rules of the game). I then thought to myself that in the games of Mancala and Wari the way the winner is determined is exactly the same, and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in the rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor I got the impression that I should find a different design. I then came up with this: each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. WinnerDeterminer, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just reuse the Mancala versions.

    I think the implementation of these rules as a strategy pattern is certainly valid. But the real problem comes when I want to design it further. In reading about the template method pattern I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely:

    - deposit stones in holes (this is the same for all games, so it would be implemented in the template method itself)
    - determine the result of the move
    - determine if the game has finished because of the previous move
    - if the game has finished, determine who has won

    Those last three steps are all in my strategy pattern described above. I'm having a lot of trouble combining these two. One possible solution I found would be to abandon the strategy pattern, but I don't really see the design difference between the strategy pattern and that alternative. And I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the TurnTemplate object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a MancalaRuleFactory, WariRuleFactory, etc., and they would create the correct instances of the rules and hand me back a RuleSet object.

    Let's say that I use the strategy + abstract factory pattern and I have a RuleSet object which has the algorithms for the three rules in it. The only way I feel I can still use the template method pattern with this is to pass this RuleSet object to my TurnTemplate. The 'problem' that then surfaces is that I would never need my concrete implementations of the TurnTemplate; these classes would become obsolete. In my protected methods in the TurnTemplate I could just call ruleSet.determineWinner(). As a consequence, the TurnTemplate class would no longer be abstract but would have to become concrete. Is it then still a template method pattern?

    To summarize, am I thinking in the right way or am I missing something easy? If I'm on the right track, how do I combine a strategy pattern and a template method pattern? This is part of a homework assignment, but I'm not looking to be gifted the answer; I have deliberately been very verbose in my question to show that I have thought about it before coming here to ask.
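
    For concreteness, here is a minimal Java sketch of the combination described above. All names are hypothetical and the Board, Move, and Player types are reduced to empty placeholders; the point is only the shape of the design, where the template method keeps the fixed ordering while the variable steps are delegated to a RuleSet assembled by a game-specific factory:

        // Hypothetical names throughout; Board, Move and Player are stubs.
        interface ResultDeterminer { void determineResult(Board board, Move move); }
        interface EndDeterminer    { boolean isGameOver(Board board); }
        interface WinnerDeterminer { Player determineWinner(Board board); }

        // Groups the three strategies; an abstract factory (MancalaRuleFactory,
        // WariRuleFactory, ...) would build one RuleSet per game variant.
        class RuleSet {
            final ResultDeterminer result;
            final EndDeterminer end;
            final WinnerDeterminer winner;
            RuleSet(ResultDeterminer r, EndDeterminer e, WinnerDeterminer w) {
                result = r; end = e; winner = w;
            }
        }

        class TurnTemplate {
            private final RuleSet rules;
            TurnTemplate(RuleSet rules) { this.rules = rules; }

            // The template method: the order of the steps is fixed here,
            // while the variable steps come from the injected strategies.
            final Player playTurn(Board board, Move move) {
                depositStones(board, move);                     // identical for all games
                rules.result.determineResult(board, move);      // varies per game
                if (rules.end.isGameOver(board)) {              // varies per game
                    return rules.winner.determineWinner(board); // varies per game
                }
                return null; // game continues
            }

            private void depositStones(Board board, Move move) {
                // shared sowing logic lives in the template itself
            }
        }

        // Empty placeholder types so the sketch compiles.
        class Board {}
        class Move {}
        class Player {}

    In this variant TurnTemplate is indeed concrete, trading the template method's inheritance for the strategy's composition; whether that still "counts" as the template method pattern is exactly the judgment call the question raises.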

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full source of the kernel, it is good enough for tracing it, and even for tweaking the kernel. Tracing the kernel and seeing how it works is lots of fun, but it is even more fascinating to modify it and verify the change you made. So, first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\.

    The next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither applies to our case. So what should we do? Let's dig deeper into the coreos\nk folder: there are a couple of subfolders, CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:

    1. Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\.
    2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modifications you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\.
    4. "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel.

    If everything went well so far, you should get a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\. Basically, you just rebuilt your new kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1", then "sysgen -p common nk nkprof" and "makeimg", if you can't wait the extra minutes for "blddemo clean -q".

    That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), which creates Sources files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So I am going to introduce a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again, the steps:

    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Use "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you could also "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c.
    6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths in your SOURCES file.
    7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg". Voila!

    Here is an example of the macros you need to add for x86:

        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE

        # Machine independent defines
        CDEFINES=$(CDEFINES) -DDBGSUPPORT

        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc

        !IFDEF DP_SETTINGS
        CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
        !ENDIF

        ASM_SAFESEH=1
        CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE

    Read the article

  • Pondering New Technology

    - by MOSSLover
    So I have been standing at the end of a fork for a year, peering around the corner, looking this way and that, trying to figure out where I fit in. I was so enthusiastic and excited about Silverlight when it came out. It was this amazing, awesome technology that had this really cool animation and webcam/multimedia piece. I thought if I put my money on Silverlight it was going somewhere. Then HTML 5 came out and the wind shifted. I realized times were changing.

    I have been working with web technologies since I was 15 years old, playing with HTML and JavaScript, and even CSS back when it first came out. In tech years, 15 years is forever. Things change so quickly and so often. So I guess the question is: where am I heading? The answer is mobile technology. For some reason I was resisting change, and I have no idea why. I guess I really wanted to see more than one player. I didn't quite feel that Microsoft was ready with Windows Phone 7. It was a great start, but it just didn't feel like they went all the way. Now with Windows 8 it feels like they are at version 2.0. It's like hitting Silverlight 2.0, where they finally added the .NET bits. The path is paved, but we don't know where it's leading. Then we had 3.0, 4.0, and 5.0 to mature the technology into what it is today (man, I'm hoping they are going to roll some of the cool bits into other tech if they don't exist).

    Anyway, I'm on board, but I'm not buying a Windows tablet just yet. I was hoping for a 7 inch screen from Apple around $300 or just above, and a 7 inch screen from the MS side around the same price. What I got was the Apple side, but nothing from Microsoft. I was pretty disappointed with the $500 market price on the RT version. I realized Microsoft is close, but not quite where Apple is today. Yes, the devices come with Office, but the sticker is just too much for a first generation device. Then again, if you guys remember correctly, the first generation iPad was quite expensive too, so I guess for a 1st generation device $500 is pretty good.

    So I guess what I'm trying to say is that I am shifting my focus entirely away from Silverlight and more towards mobile. I will be doing a lot of postings on iOS, Windows 8, and Windows Phone 8 with SharePoint 2013. Since I don't have a tablet and don't foresee myself buying one just yet, it might be mostly on the phones for right now. I want to do a bunch of testing on various devices on what needs to be done in apps on each device. I might add a bit on porting code from one to the other. I think it's going to be a lot of fun and make things flow a little better for me. In a way it's kind of like Star Trek VI, where they talk about the Undiscovered Country. I'm going to jump forward completely and see where I land.

    Technorati Tags: SharePoint 2013,Mobile,Windows 8,iOS

    Read the article

  • Detecting Process Shutdown/Startup Events through ActivationAgents

    - by Ramkumar Menon
    @10g - This post is motivated by one of my close friends and colleagues, who wanted to proactively know when a BPEL process shuts down or re-activates. This typically happens when you have a BPEL process with an inbound polling adapter and the adapter loses connectivity to the source system, or whatever else causes it. One valuable suggestion came in from one of my colleagues: he suggested I write my own ActivationAgent to do the job. Well, it really worked. Here is a sample ActivationAgent that you can use. There are a few methods you need to override from BaseActivationAgent, and you are on your way to receiving notifications (or what not) whenever the shutdown/startup events occur. In the example below, I retrieve the emailAddress property [that is specified in your bpel.xml activationAgent section] and use it to send out an email notification on activation agent initialization. You could choose to do different things, but the bottom line is that you can use the API shown below to access the very same properties that you specify in bpel.xml.

        package com.adapter.custom.activation;

        import com.collaxa.cube.activation.BaseActivationAgent;
        import com.collaxa.cube.engine.ICubeContext;
        import com.oracle.bpel.client.BPELProcessId;

        import java.util.Date;
        import java.util.Properties;

        public class LifecycleManagerActivationAgent extends BaseActivationAgent {

            public BPELProcessId getBPELProcessId() {
                return super.getBPELProcessId();
            }

            private void handleInit() throws Exception {
                // Write initialization code here
                System.err.println("Entered initialization code....");
                // e.g. read the property configured in bpel.xml and send an email
                // (sendEmail is left for you to implement):
                // String emailAddress = getActivationAgentDescriptor().getPropertyValue("emailAddress");
                // sendEmail(emailAddress);
            }

            private void handleLoad() throws Exception {
                // Write load code here
                System.err.println("Entered load code....");
            }

            private void handleUnload() throws Exception {
                // Write unload code here
                System.err.println("Entered unload code....");
            }

            private void handleUninit() throws Exception {
                // Write uninitialization code here
                System.err.println("Entered uninitialization code....");
            }

            public void init(ICubeContext icubecontext) throws Exception {
                super.init(icubecontext);
                System.err.println("Initializing LifecycleManager Activation Agent .....");
                handleInit();
            }

            public void unload(ICubeContext icubecontext) throws Exception {
                super.unload(icubecontext);
                System.err.println("Unloading LifecycleManager Activation Agent .....");
                handleUnload();
            }

            public void uninit(ICubeContext icubecontext) throws Exception {
                super.uninit(icubecontext);
                System.err.println("Uninitializing LifecycleManager Activation Agent .....");
                handleUninit();
            }

            public String getName() {
                return "Lifecyclemanageractivationagent";
            }

            public void onStateChanged(int i, ICubeContext iCubeContext) { }
            public void onLifeCycleChanged(int i, ICubeContext iCubeContext) { }
            public void onUndeployed(ICubeContext iCubeContext) { }
            public void onServerShutdown() { }
        }

    Once you compile this code, generate a jar file and ensure you add it to the server startup classpath. The library is ready for use after the server restarts. To use this activation agent, add an additional activationAgent entry in the bpel.xml of the BPEL process that you wish to monitor. After you deploy the process, the ActivationAgent object will be called back whenever the events corresponding to the overridden methods [init(), load(), unload(), uninit()] are raised, and your custom code is executed. Here is a sample bpel.xml illustrating the activationAgent definition and property definition.
        <?xml version="1.0" encoding="UTF-8"?>
        <BPELSuitcase timestamp="1291943469921" revision="1.0">
          <BPELProcess wsdlPort="{http://xmlns.oracle.com/BPELTest}BPELTestPort" src="BPELTest.bpel" wsdlService="{http://xmlns.oracle.com/BPELTest}BPELTest" id="BPELTest">
            <partnerLinkBindings>
              <partnerLinkBinding name="client">
                <property name="wsdlLocation">BPELTest.wsdl</property>
              </partnerLinkBinding>
              <partnerLinkBinding name="test">
                <property name="wsdlLocation">test.wsdl</property>
              </partnerLinkBinding>
            </partnerLinkBindings>
            <activationAgents>
              <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="test">
                <property name="portType">Read_ptt</property>
              </activationAgent>
              <activationAgent className="com.oracle.bpel.activation.LifecycleManagerActivationAgent" partnerLink="test">
                <property name="emailAddress">[email protected]</property>
              </activationAgent>
            </activationAgents>
          </BPELProcess>
        </BPELSuitcase>

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring; that is, the "code" was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language, then, when compilers were created, in computer languages that could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a distributed architecture like Windows Azure also has specific implementation details that our code must follow.

    But why make a change? As I've described, we need to change our code to follow advances in technology. There's no point in change for its own sake, but as a new paradigm offers benefits to our users, it's important for us to leverage those benefits where it makes sense. That's most often done in new development projects. It's a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a distributed architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    - Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the "state") should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning path for stateless programming (although not unique to this language type) is to learn functional programming. (A short sketch of this idea appears at the end of this entry.)

    - Server-Side Processing - Along with developing using a stateless design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don't always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it's imperative to deliver only results and graphical elements where possible.

    - Token-Based Authentication - Also called "Claims-Based Authorization", this coding practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A "token" or "claim", often represented as a certificate, is sent along with a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources: See the references on "Nondistributed Deployment" and "Distributed Deployment" at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx. Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming. Another good discussion on Stack Overflow, on server-side processing, is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing. Claims-based authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
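
    As a footnote to the stateless programming concept above, here is a minimal Java sketch. The StateStore interface and all the names in it are illustrative only, not any Azure API; the point is simply that no server instance keeps retained state in memory, and every request round-trips state through a store shared by all instances:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Illustrative only: in practice this would be backed by shared storage
        // (tables, blobs, a cache, a database) reachable from every instance.
        interface StateStore {
            String load(String key);
            void save(String key, String value);
        }

        // In-memory stand-in for a real shared store, just for demonstration.
        class InMemoryStateStore implements StateStore {
            private final Map<String, String> data = new ConcurrentHashMap<>();
            public String load(String key) { return data.get(key); }
            public void save(String key, String value) { data.put(key, value); }
        }

        class CartHandler {
            private final StateStore store;
            CartHandler(StateStore store) { this.store = store; }

            // Stateless: the request carries the session id, and all retained
            // state goes through the shared store, so any instance can serve
            // any request and instances can be added or recycled freely.
            String addItem(String sessionId, String item) {
                String cart = store.load(sessionId);
                cart = (cart == null) ? item : cart + "," + item;
                store.save(sessionId, cart);
                return cart;
            }
        }

        class StatelessDemo {
            public static void main(String[] args) {
                CartHandler handler = new CartHandler(new InMemoryStateStore());
                handler.addItem("session-42", "apples");
                System.out.println(handler.addItem("session-42", "bread")); // apples,bread
            }
        }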

    Read the article

  • Remove Ubuntu or XP from the Windows 7 Boot Menu

    - by Trevor Bekolay
    If you've ever used a dual-boot system and then removed one of the operating systems, it can still show up in Windows 7's boot menu. We'll show you how to get rid of old entries and speed up the boot process.

    To edit the boot menu, we will use a program called bcdedit that's included with Windows 7. There are some third-party graphical applications that will edit the menu, but we prefer to use built-in applications when we can.

    First, we need to open a command prompt with Administrator privileges. Open the Start menu and type cmd into the search box. Right-click on the cmd program that shows up and select Run as administrator. Alternatively, if you've disabled the search box, you can find the command prompt in All Programs > Accessories.

    In the command prompt, type in bcdedit and press Enter. A list of the boot menu entries will appear. Find the entry that you would like to delete; in our case, this is the last one, with the description of "Ubuntu". What we need is the long sequence of characters marked as the identifier. Rather than type it out, we will copy it to be pasted later. Right-click somewhere in the command prompt window and select Mark. By clicking the left mouse button and dragging over the appropriate text, select the identifier for the entry you want to delete, including the left and right curly braces on either end. Press Enter. This will copy the text to the clipboard.

    In the command prompt, type in bcdedit /delete followed by a space, then right-click somewhere in the command prompt window and select Paste, so that the completed command reads bcdedit /delete {identifier}. Press Enter to run the command. The boot menu entry will now be deleted. Type in bcdedit again to confirm that the offending entry is now gone from the list.

    If you reboot your machine now, you will notice that the boot menu does not even come up, because there is only one entry in the list (unless you had more than two entries to begin with). You've shaved a few seconds off of the boot process, not to mention spared yourself the effort of pressing Enter at the menu.

    There's a lot more that you can do with bcdedit, like change the description of boot menu entries, create new entries, and much more. For a list of what you can do with bcdedit, type the following into the command window: bcdedit /help

    While there are third-party GUI solutions for accomplishing the same thing, using this method will save you the extra steps of installing another program.

    Read the article

  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the 'Pareto Principle', or 80-20 rule: it is targeted to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production.

    User Stories

    The user-end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below:

    - Create a new employee record
    - Read existing employee records
    - Update an existing employee record
    - Delete existing employee records

    Key Technology Areas

    - ASP.NET 4.0
    - Entity Framework 4.0
    - T-4 Template
    - Visual Studio 2010

    Architectural Objective

    There is no universal architecture which can be considered the best for all sorts of applications around the world. Based on requirements, constraints and environment, application architecture can differ from one to another, and trade-off factors are one of the important considerations when deciding on a particular architectural solution. As noted above, the starter kit targets "productivity" as its architectural objective, which typically also involves other trade-off factors, such as testability, flexibility, performance etc. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been implemented cleverly in this project to keep these trade-off factors to a minimum.

    Why Employee Info Starter Kit is Not a Framework

    Application frameworks are really great for productivity, and some of them are unavoidable in this modern age. However, relying on too many frameworks may weigh a project down, as frameworks are typically designed to serve a wide range of different usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as it enables them to adjust the application on demand. Employee Info Starter Kit provides hundreds of "connected" snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly, which are fully mock-testable thanks to language support for partial methods and the latest support for mock testing in Entity Framework.

    Why Employee Info Starter Kit is Different than Other Open-source Web Applications

    Software development is one of the most rapidly growing industries around the globe, where technology is updated very frequently to adapt to greater challenges over time. There are literally thousands of community web sites, blogs and forums dedicated to providing support for adopting new technologies. While some are really great for learning new technologies quickly, in most cases they are either too "simple and brief" to be used in real-world scenarios, or too "complex and detailed", typically focused on achieving a product goal (such as a CMS, e-Commerce etc.) from an "end user" perspective, with a long learning curve with respect to the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented: it takes a hybrid approach, "simple and detailed", where a simple domain has been chosen to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly.

    Roadmap

    Since its first release in 2008 in the MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since then. Encouraged by this great response, we have a strong commitment to the community to continuously provide support for it with respect to the latest technologies. Currently hosted on CodePlex, this community-driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET Webform, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud), Visual Studio Automated Test etc. See here for the full list of current and future editions.

    Read the article

  • Oracle and Partners release CAMP specification for PaaS Management

    - by macoracle
    Cloud Application Management for Platforms

    The public release of the Cloud Application Management for Platforms (CAMP) specification, an initial draft of what is expected to become an industry-standard self-service interface specification for Platform as a Service (PaaS) management, represents a significant milestone in cloud standards development. Created by several players in the emerging cloud industry, including Oracle, the specification is being submitted to the OASIS standards organization (draft charter), where it will be finalized in an open development process.

    CAMP is targeted at application developers and deployers for self-service management of their applications on a Platform-as-a-Service cloud. It is closely aligned with the application development process, where applications are typically developed in an Application Development Environment (ADE) and then deployed into a private or public platform cloud. CAMP standardizes the model behind an application's dependencies on platform components and provides a standardized format for moving applications between the ADE and the cloud, and, if and when desirable, between clouds. Once an application is deployed, CAMP provides users with a standardized self-service interface to the PaaS offering, allowing the cloud consumer to manage the lifecycle of the application on that platform and the use of the underlying platform services.

    The CAMP interface includes a RESTful binding of the CAMP model onto the standard HTTP protocol, using JSON as the encoding for the model resources. The model for CAMP includes resources that represent the Application, its Components and any Platform Components that they depend on. It's important that PaaS cloud consumers understand that for a PaaS cloud, these are the abstractions that the user would prefer to work with, not virtual machines and resources such as compute power, storage and networking. PaaS cloud consumers would also not like to become system administrators for the infrastructure that is hosting their applications and component services. CAMP works on this more abstract level, and yet still accommodates platforms that are built using an underlying infrastructure cloud. With CAMP, it is up to the cloud provider whether or not this underlying infrastructure is exposed to the consumer.

    One major challenge addressed by the CAMP specification is that of ensuring that application deployment on a new platform is as seamless and error-free as possible. This becomes even more difficult when the application may have been developed for a different platform and is now moving to a new one. In CAMP this is accomplished by matching the requirements of the application and its components to the specific capabilities of the underlying platform. This needs to be done regardless of whether there are existing pools of virtualized platform resources (such as a database pool) which are provisioned (on the basis of a schema, for example), or whether the platform component is really just a set of virtual machines drawn from an infrastructure pool.

    The interoperability between platform clouds that CAMP offers means that a CAMP client such as an ADE can target multiple clouds with a single common interface. Applications can even be spread across multiple platform clouds and then managed without needing to create a specialized adapter to manage the components running in each cloud.

    The development of CAMP has been an effort by a small set of companies, but there are significant advantages to this approach. For example, the way that each of these companies creates their platforms is different enough to ensure that CAMP can cover a wide range of actual deployments. CAMP is now entering the next phase of development under the guidance of an open standards organization, OASIS, which will likely broaden its capabilities. Our hope, however, is to keep it concise and minimal, to ease implementation and adoption. Over time there will be many different types of platform components that applications can use and which need management; at this point CAMP includes only one example of this (in an appendix): Database as a Service. I am looking forward to the start of the CAMP Technical Committee in OASIS and will do my best to ensure a successful development process. Hope to see you there.
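
    Purely as an illustration of what a RESTful, JSON-encoded self-service call might look like, here is a small Java sketch. The host and resource path below are hypothetical stand-ins, not taken from the specification; consult the CAMP draft for the actual resource model:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        class CampClientSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical endpoint; a real provider defines its own base URI,
                // and the CAMP specification defines the resource types and links.
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://paas.example.com/camp/applications/42"))
                        .header("Accept", "application/json")
                        .GET()
                        .build();

                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());

                // The body would be a JSON document describing the application
                // resource, with links to its components and platform components.
                System.out.println(response.body());
            }
        }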

    Read the article

  • SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB

    - by Pinal Dave
    Let us take a look at the application you own at your business. If you pay attention to the underlying database for that application, you will be amazed. Every successful business these days processes way more data than it used to. The number of transactions and the amount of data are growing at an exponential rate; every single day there is more data to process than before. Big data is no longer a concept; it is now turning into reality. If you look around, there are many different big data solutions, and it can be quite a difficult task to figure out where to begin. Personally, I have been experimenting with a lot of different solutions which allow my database to scale immediately without much hassle while maintaining optimal database performance. There are certainly some solutions out there, but for many of them I would even have to learn their specific language, and there is a lot of new exploration to do. Honestly, what I prefer is a product which works with the language I know (SQL) and follows all the RDBMS concepts I am familiar with (ACID etc.). NuoDB is one such solution. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database.

    Step 1: Follow me and go to the NuoDB download page. Simply fill out the form, accept the online license agreement, and you will be taken directly to a page where you can select whichever platform you prefer to install NuoDB on. In my example below, I select the Windows 64-bit platform, as it is one of the most popular NuoDB platforms. (You can also run NuoDB on Amazon Web Services, but I prefer to install it on my local machine for the purposes of this blog.)

    Step 2: Once you have downloaded the NuoDB installer, double-click on it to install it on the Windows platform.

    Step 3: Follow the installation wizard, as it is pretty straightforward and easy to do. I have selected all the options to install, as the overall installation is very simple and does not take up much space. I have installed it on my C drive, but you can select your preferred drive. It is quite possible that if you do not have 64-bit Java, the installer will throw an error. If you face this error, I suggest you download 64-bit Java from the following link: http://java.com/en/download/manual.jsp. If you already have 64-bit Java installed, you can continue with the installation; otherwise, install Java and start again from Step 1. As in my case, I already had 64-bit Java installed, and you won't believe me when I say that the entire installation of NuoDB only took me around 90 seconds. Click Finish to exit the installation.

    Step 4: Once the installation is successful, NuoDB will automatically open two tabs, Console and DevCenter, in your preferred browser. On the Console tab you can explore various components of the NuoDB solution, e.g. QuickStart, Admin, Explorer, Storefront and Samples. We will see the various components and their usage in future blog posts.

    If you follow the steps in this post, which I followed to install NuoDB, you will agree that the installation of NuoDB is extremely smooth and it was indeed a pleasure to install a database product with such ease. If you have installed other database products in the past, you will absolutely agree with me.
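
    Before moving on, if you want to verify the install from code, here is a minimal JDBC sketch (with the NuoDB JDBC driver jar on the classpath). Be aware that the connection URL format and the dba/goalie credentials below are assumptions based on the NuoDB QuickStart sample database, so substitute whatever you configured:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        class NuoDBSmokeTest {
            public static void main(String[] args) throws Exception {
                // Assumed QuickStart database name and credentials; adjust to your setup.
                String url = "jdbc:com.nuodb://localhost/test";
                try (Connection conn = DriverManager.getConnection(url, "dba", "goalie");
                     Statement stmt = conn.createStatement();
                     // DUAL-style probe query; any lightweight query works here.
                     ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
                    while (rs.next()) {
                        // Prints 1 if the connection and query succeeded.
                        System.out.println("Connected, got: " + rs.getInt(1));
                    }
                }
            }
        }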
    So download NuoDB and install it today, and in tomorrow's blog post I will take the installation to the next level.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: NuoDB

    Read the article

  • What Counts For a DBA: Ego

    - by Louis Davidson
    Leaving aside, for a second, Freud's psychoanalytical definitions, the term "ego" generally refers to a person's sense of self, and their self-esteem. In casual usage, however, it usually appears in the adjectival form, "egotistical" (most often followed by "jerk"). You don't need to be a jerk to be a DBA; humility is important. However, ego is important too. A good DBA needs a certain degree of self-esteem, a belief and pride in what he or she can do better than anyone else can. The ideal DBA needs to be humble enough to admit when they are wrong but egotistical enough to know when they are right, and to stand up for that knowledge and make their voice heard. In most organizations, the DBA team is seriously outnumbered by headstrong developers and clock-driven managers, and "great" DBAs will often be outnumbered by... well... the not so great. In order to be heard in this environment, a DBA will not only need to be very skilled, but will also need a healthy dose of ego.

    As Freud might have put it, the unconscious desire of the DBA (the id) is for iron-fist control over their databases and the code that runs in them. However, the ego moderates this desire, seeking to "satisfy the id in realistic ways that, in the long term, bring benefit rather than grief". In other words, the ego understands the need to exert a measure of control and self-belief, but also to tolerate and play nicely with developers and other DBAs. The trick, naturally, is learning how to be heard when it is important, but also to make everyone around you welcome that input, even when you have to be bold enough to make the "I know what I am talking about, and you... well... not so much" decisions.

    Consider a baseball team, bottom of the ninth inning of the championship game, man on first and down one run. Almost anyone on that team will have the ability to hit a home run, but only one or two will have the iron belief that they can pull it off in this critical, end-game situation. The player you need in this situation is the one who has passionately gone the extra mile preparing for just this moment, is bursting at the seams with self-confidence, and can look the coach in the eye and state, boldly, "Put me in, I am your best bet".

    Likewise, on those occasions when high customer demand coincides with copious system errors, and panic is bubbling just beneath the surface, you don't need the minimally qualified support person, armed with the "reboot and hope" technique (though that sometimes works!). You need the DBA who steps up and says, "Put me in", and who has the skill and tenacity to back up those words, to pinpoint and fix the problem, whatever it takes, while keeping customers and managers happy.

    Of course, the egotistical DBA will happily spend hours telling you how great they are at their job, and how brilliantly they put out a previous fire, and this is no guarantee that they can deliver. However, if an otherwise-humble DBA looks you in the eye and says, "I can do it", then hear them out. Sometimes, this burst of ego will be exactly what's required.

    Read the article

  • Overview of Microsoft SQL Server 2008 Upgrade Advisor

    - by Akshay Deep Lamba
    Problem

    Like most organizations, we are planning to upgrade our database server from SQL Server 2005 to SQL Server 2008. I would like to know: is there an easy way to know in advance what kind of issues one may encounter when upgrading to a newer version of SQL Server? One way of doing this is to use the Microsoft SQL Server 2008 Upgrade Advisor to plan for upgrades from SQL Server 2000 or SQL Server 2005. In this tip we will take a look at how one can use the SQL Server 2008 Upgrade Advisor to identify potential issues before the upgrade.

    Solution

    SQL Server 2008 Upgrade Advisor is a free tool designed by Microsoft to identify potential issues before upgrading your environment to a newer version of SQL Server. Below are the prerequisites which need to be installed before installing the Microsoft SQL Server 2008 Upgrade Advisor:

    - .NET Framework 2.0 or a higher version
    - Windows Installer 4.5 or a higher version
    - Windows Server 2003 SP1 or a higher version, Windows Server 2008, Windows XP SP2 or a higher version, or Windows Vista

    You can download SQL Server 2008 Upgrade Advisor from the following link. Once you have successfully installed Upgrade Advisor, follow the steps below to see how you can use this tool to identify potential issues before upgrading your environment.

    1. Click Start -> Programs -> Microsoft SQL Server 2008 -> SQL Server 2008 Upgrade Advisor.
    2. Click Launch Upgrade Advisor Analysis Wizard to open the wizard.
    3. On the wizard welcome screen, click Next to continue.
    4. On the SQL Server Components screen, enter the Server Name and click the Detect button to identify the components which need to be analyzed, then click Next to continue with the wizard.
    5. On the Connection Parameters screen, choose the Instance Name and Authentication, then click Next to continue with the wizard.
    6. On the SQL Server Parameters screen, select the databases which you want to analyze, trace files if any, and SQL batch files if any. Then click Next to continue with the wizard.
    7. On the Reporting Services Parameters screen, you can specify the Reporting Server instance name; then click Next to continue with the wizard.
    8. On the Analysis Services Parameters screen, you can specify an Analysis Server instance name; then click Next to continue with the wizard.
    9. On the Confirm Upgrade Advisor Settings screen, you will see a quick summary of the options which you have selected so far. Click Run to start the analysis.
    10. On the Upgrade Advisor Progress screen, you will be able to see the progress of the analysis. Basically, the Upgrade Advisor runs predefined rules which will help to identify potential issues that can affect your environment once you upgrade your server from a lower version of SQL Server to SQL Server 2008.
    11. Once Upgrade Advisor has completed the analysis of SQL Server, Analysis Services and Reporting Services, click the Launch Report button at the bottom of the wizard screen to see the output.
    12. On the View Report screen, you can see a summary of the issues which can affect you once you upgrade. To learn more about each issue, you can expand it and read the detailed description.

    Read the article

  • 24 Hours of PASS: Whine, Whine, Whine

    - by Most Valuable Yak (Rob Volk)
    24 Hours of PASS (or 24HOP) is a great program offered by PASS to provide free, online training for anyone who wants to learn more about SQL Server. They routinely have the best SQL Server presenters available for these sessions, and attract hundreds, perhaps even a thousand attendees from around the world. This is definitely one of the best things they've started doing in the past few years, and every session I've attended has been excellent.

    So why am I so grumpy about it? I'm not really; pretty much everything here is a minor annoyance that I can deal with. However, since they're so minor, they seem to be things that could easily be corrected and would make the process much better.

    First off, this is my biggest gripe, the registration page: https://www323.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=lj6378f4fhf5hpdm

    What grinds my gears about this? I have to scroll alllllllllllllllllllllllllllll the way to the bottom to actually register for the sessions. This wouldn't be so bad except that all the details of each session, including the presenter, are in a separate list at the top. Both lists contain info the other does not, and scrolling between them to determine "Should I make time to listen to this? Who is speaking at this time anyway?" is really unnecessary. My preference would be to keep the top list and add the checkboxes and schedule info in separate columns. This is a full-width design, so there's plenty of space for this data, which is pretty small anyway. The other huge benefit is halving the size of the page, which improves performance and lowers bandwidth usage considerably. And if you know HTML/ASP.NET, and you view the page source, you can find PLENTY of other things that can be reduced even further (not just viewstate).

    One nice thing that PASS does is send iCal reminders to your email address so you can accept them to your calendar. Again, they leave the presenter out of the appointment details, while still duplicating the meeting title in the body. Sometimes I make decisions based on speaker rather than content (Natalie Portman is reading the Yellow Pages??? I'M THERE!) and having the speaker in the iCal is helpful.

    The next minor annoyances are the necessity of providing a company name, and the survey questions. I know PASS needs to market themselves effectively, and they need information to do that, and since this is a free event it's really not worth complaining about, but why ask the survey question twice? (Once at registration, once again when joining the LiveMeeting.) Same thing for the company name. All of this should be tied to the email address, so that's all I should need to enter when joining the LiveMeeting.

    The last one is also minor, but it irks me in this day and age of multiple browsers and the decline of Internet Explorer as a dominant platform. The registration page was originally created in Visual Studio 2003, and has a lot of IE-specific crud representative of the browser situation of 2003. (IE5 references? Really? And is the aforementioned viewstate big enough?) This causes some grief with other browsers like Firefox, Chrome, and sometimes IE8 or 9. And don't get me started on using the page on a Mac or in Safari.

    My main point is that PASS is an international organization, welcoming everyone from all levels of SQL Server proficiency, and in that spirit I think it would help to accommodate a wider range of browser software, especially since the registration page is extremely simple. I recognize that this page is not hosted on the PASS website and may be maintained by some division of Microsoft, but to me that's even worse if MS can't update their own pages. They've deprecated IE6, so they don't need to maintain support for it on their own websites anymore.

    OK, I'll shut up now. #sqlpass #24HOP

    Read the article

  • Stuck due to "knowing too much"

    - by Ran Biron
    Note: more discussion at http://news.ycombinator.com/item?id=4037794

    Welcome, Hacker News visitors! While HN is a fine forum for discussion and debate, Programmers - Stack Exchange is not. From the FAQ: "If your motivation for asking the question is 'I would like to participate in a discussion about ____', then you should not be asking here. However, if your motivation is 'I would like others to explain ____ to me', then you are probably OK." (Discussions are of course welcome in our real-time web chat.) Currently, this question is viewed by the membership of Programmers.SE as more likely to provoke unproductive discussion than constructive answers; while debates on its form and future are conducted, it will be locked to prevent arguments and vandalism. -- Shog9

    I have a relatively simple development task, but every time I try to attack it, I end up spiraling into deep thoughts: how it could extend into the future, what the second-generation clients are going to need, how it affects "non functional" aspects (e.g. performance, authorization...), how it would best be architected to allow change...

    I remember myself a while ago, younger and, perhaps, more eager. The "me" I was then wouldn't have given a thought to all that; he would've gone ahead and written something, then rewritten it, then rewritten it again (and again...). The "me" today is more hesitant, more careful. I find it much easier today to sit and plan and instruct other people on how to do things than to actually go ahead and do them myself, not because I don't like to code (the opposite, I love to!) but because every time I sit at the keyboard, I end up in that same annoying place.

    Is this wrong? Is this a natural evolution, or did I drive myself into a rut?

    Fair disclosure: in the past I was a developer; today my job title is "system architect". Good luck figuring out what that means, but that's the title.

    Wow. I honestly didn't expect this question to generate that many responses. I'll try to sum it up.

    Reasons:

    - Analysis paralysis / over-engineering / gold plating / (any other "too much thinking up-front can hurt you")
    - Too much experience for the given task
    - Not focusing on what's important
    - Not enough experience (and realizing that)

    Solutions (not matched to reasons):

    - Test first
    - Start coding (+ for fun)
    - One to throw away (+ one API to throw away)
    - Set time constraints
    - Strip away the fluff, stay with the stuff
    - Make flexible code (kinda the opposite of "one to throw away", no?)

    Thanks to everyone. I think the major benefit here was realizing that I'm not alone in this experience. I have, actually, already started coding, and some of the too-big things have fallen off, naturally. Since this question is closed, I'll accept the answer with the most votes as of today. When/if it changes, I'll try to follow.

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the various components in the Big Data story. In this article we will understand the various analytics tasks we try to achieve with Big Data, along with a list of the important tools in the Big Data story.

    When you have plenty of data around you, what is the first thing that comes to your mind? "What does all this data mean?" Exactly: the same thought comes to my mind as well. I always want to know what all the data means and what meaningful information I can get out of it. Most Big Data projects are built to retrieve the various intelligence all this data contains within it. Let us take the example of Facebook. When I look at my Facebook friends list, I always want to ask many questions, such as:

    - On which date do the most of my friends have a birthday?
    - What is the favorite film of most of my friends, so I can talk about it and engage them?
    - What is the most liked place to travel among my friends?
    - Which is the most disliked cuisine for my friends in India and the USA, so that when they travel, I do not take them there?

    There are many more questions I can think of. This illustrates how important the analysis of Big Data is. Here are a few of the kinds of analysis you can use with Big Data:

    - Slicing and Dicing: This means breaking down your data into smaller sets and understanding them one set at a time. It also helps to present the information in a variety of user-digestible ways. For example, if you have data related to movies, you can slice and dice the data in various ways, such as by actor or by movie length.
    - Real-Time Monitoring: This is very crucial in social media when there is an event happening and you want to measure the impact while the event is happening. For example, on Twitter during a football match, you can watch what fans are saying about the match while it is happening.
    - Anomaly Prediction and Modeling: If the business is running normally, all is well, but if there are signs of trouble, everyone wants to know about them early on. Big Data analysis of various patterns can be very helpful for predicting the future. Though it may not always be accurate, certain hints and signals can be very helpful. For example, lots of data can help conclude that lots of rain will increase the sale of umbrellas.
    - Text and Unstructured Data Analysis: Unstructured data is now becoming the norm in the new world, and it is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data and make meaningful data out of it. For example, from the analysis of lots of images, one can predict that people like to wear certain colors in certain months.

    Big Data Analytics Solutions

    There are many different Big Data analytics solutions out in the market. It is impossible to list all of them, so I will list a few of them here:

    - Tableau - This has to be one of the most popular visualization tools out in the big data market.
    - SAS - A high-performance analytics and infrastructure company.
    - IBM and Oracle - They have a range of tools for Big Data analysis.

    Tomorrow

    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem: the Data Scientist.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Agile Manifesto, Revisited

    - by GeekAgilistMercenary
    Again, conversations give me a zillion things to write about.  The recent conversation that has cropped up again is my various viewpoints of the Agile Manifesto.  Not all the processes that came after the manifesto was written, but just the core manifesto itself.  Just for context, here is the manifesto in all the glory. We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: Individuals and interactions over processes and tools Working software over comprehensive documentation Customer collaboration over contract negotiation Responding to change over following a plan That is, while there is value in the items on the right, we value the items on the left more. Several of the key signatories at the time went on to write some of the core books that really gave Agile Software Development traction.  If you check out the Agile Manifesto Site and do a search for any of those people, you will find a treasure trove of software development information. My 2 Cents First off, I agree with a few people out there.  Agile is not Scrum for instance.  Do NOT get these things confused when checking out Agile, or pushing forward with Scrum.  As David Starr points out in his blog entry, "About 35 minutes into this discussion, I realized I hadn?t heard a question or comment that wasn?t related to Scrum. I asked the room, ?How many people are on an agile team that is NOT using Scrum?? 5 hands. Seriously, out of about 150 people of so. 5 hands." So know, as this is one of my biggest pet peves these days, that Scrum is not Agile.  Another quote David writes, "I assure you, dear reader, 2 week time boxes does not an agile team make." This is the exact problem.  Take a look at the actual manifesto above.  First ideal, "Individuals and interactions over processes and tools".  There are a couple of meanings in this ideal, just as there are in the other written ideals.  But this one has a lot of contention with a set practice such as Scrum.  There are other formulas, namely XP (eXtreme) and Kanban are two that come to mind often.  But none of these are Agile, but instead a process based on the ideals of Agile. Some of you may be thinking, "that?s the same thing".  Well, no, it is not.  This type of differentiation is vitally important.  Agile is a set of ideals.  Processes are nice, but they can change, they may work for some and not others.  The Agile Manifesto covers the ideals behind what is intended, that intention being to learn and find new ways to build better software. Ideals, not processes.  Definition versus implementation.  Class versus object.  The ideals are of utmost importance, the processes are secondary, the first ideal is what really lays this out for me "Individuals and interactions over processes and tools".  Yes, we need tools but we need the individuals and their interactions more. For those coming into a development team, I hope you take this to mind.  It is of utmost importance that this differentiation is known and fought for.  The second the process becomes more important than the individuals and interactions, the team will effectively lose the advantages of Agile Ideals. This is just one of my first thoughts on the topic of Agile.  I will be writing more in the near future about each of the ideals.  I will make a point to outline more of my thoughts, my opinions, and experience with the ideals of Agile and the various processes that are out there.  
Maybe I will stumble upon something new with the help of my readers?  That would be a grand overture to the ideals I hold. For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like. Agilist Mercenary

    Read the article

  • Oracle's PeopleSoft Customer Advisory Boards Convene to Discuss Roadmap at Pleasanton Campus

    - by john.webb(at)oracle.com
    Last week we hosted all of the PeopleSoft CABs (Customer Advisory Boards) at our Pleasanton Development Center to review our detailed designs for future Feature Packs, PeopleSoft 9.2, and beyond. Over 150 customers from 79 companies attended, representing a variety of industries, geographies, and company sizes. The PeopleSoft team relies heavily on this group to provide key input on our roadmap for applications as well as technology direction. A good product strategy is one part well-thought-out idea and many handfuls of customer validation, and very often our best ideas originate from these customer discussions. While the individual CABs have frequent interactions with our teams, it's always great to have all of them in one place and in person. Our attendance was up from last year, which I attribute to two things: (1) More interest as a result of the PeopleSoft 9.1 upgrade; (2) An improving economy allowing for more travel. Maybe we should index the second item meeting-to-meeting and use it as a market indicator - we'll see! We kicked off the day one session with an overview of the PeopleSoft Roadmap and I outlined our strategy around Feature Packs and PeopleSoft 9.2. Given the high adoption rate of PeopleSoft 9.1 (over 4x that of 9.0 at the same point after the release date), there was a lot of interest around the 9.1 Feature Packs as a vehicle for continuous value. We provided examples of our 3 central design themes: Simplicity, Productivity, and lower TCO, including those already delivered via Feature Packs in 2010. A great example of this is the Company Directory feature in PeopleSoft HCM. The configuration capabilities and the new actionable links our CAB advised us on last Spring were made available to all customers late last year. We reviewed many more future navigation changes that will fundamentally change the way users interact with PeopleSoft. Our old friend, the menu tree, is being relegated from center stage to a bit part, with new concepts like Activity Guides, Train Stops, Related Actions, Work Centers, Collaborative Workspaces, and Secure Enterprise Search bringing users what they need in a contextual, role-based manner with fewer clicks. Paco Aubrejuan, our PeopleSoft GM, and Steve Miranda, the SVP for Fusion Applications, then discussed our plans around Oracle's Application Investment Strategy.  This included our continued investment in developing both PeopleSoft and Fusion as well as the co-existence strategy, with new Fusion Apps integrating to PeopleSoft Apps. Should you want to view this presentation, a recording is available. Jeff Robbins, our lead PeopleTools Strategist, provided the roadmap for PeopleTools and discussed our continuing plan to deliver annual releases to further evolve the user experience. Numerous examples were highlighted of the navigation techniques I mentioned previously. Jeff also provided a lot of food for thought around Lifecycle Management topics and how to remain current on releases with a lower cost of ownership. Dennis Mesler, from Boise, was the guest speaker in this slot, speaking about the new PeopleSoft Test Framework (PTF). Regression testing is a key cost component when product updates are applied. This new tool (which is free to all PeopleSoft customers as part of PeopleTools 8.51) provides a metadata-driven approach to recording and executing test scripts. Coupled with what our Usage Monitor enables, PTF provides our customers a powerful tool to lower costs and manage product updates more efficiently and at the time of their choosing. Beyond the general session, we broke out into the individual CABs: HCM, Financials, ESA/ALM, SRM, SCM, CRM, and PeopleTools/Technology. A day and a half of very engaging discussions around our plans took place for each product pillar. More about that to follow in future posts. We capped the first day with a reception sponsored by our partners: InfoSys, SmartERP (represented by Doris Wong), and Grey Sparling Solutions (represented by Chris Heller and Larry Grey). Great to see these old friends actively engaged in the very busy PeopleSoft ecosystem!   Jeff Robbins previews the roadmap for PeopleTools with the PeopleSoft CAB  

    Read the article

  • My Experience at Oracle !!! By Ayush Gupta

    - by Nadiya
    Hi! My name is Ayush, a graduate from BITS Pilani, now working and living in Bangalore. I joined Oracle in August 2013 as a Senior Consultant (SC) and would like to share my experiences over the first couple of months with you. It has been a wonderful journey so far. The last two months have been very exciting for me. First of all I would like to mention that the training program at Oracle that we went through really prepared us well. It matured us and allowed us to go from developing small applications in college to big enterprise products. Two months of initial training has had a lasting impact for me. I am really enjoying the knowledge I have gained so far and am learning new things in the form of product training. It's really fun to work here. We are treated like adults and we are responsible for our own workloads. With that I can't keep from mentioning the fun times we as a team have had, such as the Young Leadership programme at Hotel Fortune Trinity, which included a luxurious buffet lunch too. Wishing it could happen more frequently!  Oracle provides one of the best opportunities to learn various technologies across different platforms. What I like best about working at Oracle is the work-life balance. With the option of flexible timings, one can easily enjoy planned evenings with friends or maybe work out at the fitness centre in your building. Be it the birthday celebrations at the office or the day-long team outing at a resort, it's altogether a different experience. Overall, you get to take full ownership of your project and they give you a free leash on how you design your enhancements/changes. As one of the largest international companies, Oracle is obviously an expert on exploring the potential and possibility of inexperienced new hires. We were taught how to make an outstanding team work in a group training session in the first few weeks. From this experience I realized that perfect cooperation is not about where you come from or what your study background is; everyone can find his or her own role to support the team. Even though I am not that skilled in technology, my background has significantly helped me in learning new technologies at Oracle. My idea and suggestion is: for new joiners, the will to learn is more important than what you have learnt before. Colleagues here at Oracle are professionals in their field, always friendly and glad to help. So don't worry, all you need to do is just be confident, and have a nice attitude; Oracle will let you fully display your talent. Come and join us, here you can always find a tailor-made role for you!

    Read the article

  • HTML5 Boilerplate template for ASP.NET with Visual Studio 2010

    - by Harish Ranganathan
    This is the 5th post in the series of HTML5 for ASP.NET Developers.  Support for HTML5 in Visual Studio 2010 has been quite good with Visual Studio Service Pack 1. However, the HTML5 Boilerplate template has been one of the most popular HTML5 templates out on the internet.  Now, there is one for your favorite ASP.NET Web Forms as well as ASP.NET MVC 3 projects (even for ASP.NET MVC 2).  And it's available in the most optimal place, i.e. NuGet. Let's see it in action.  Let us fire up Visual Studio 2010 and create a "File – New Project – ASP.NET Web Application" and leave the default name to create the project.  The default project template creates Site.Master, Default.aspx and the Account (membership) files. When you run the project without any changes, it shows the default Master Page with the Home and About placeholder pages. Also, just to check the rendering on devices, let's try running the same page in the Windows Phone 7 Emulator.  You can download the SDK from here. Clearly, it looks bad on the emulator, and if we were to publish the application as is, it's going to be the same experience when users browse this app. Close the browser and then switch to Visual Studio.   Right click on the project and select "Manage NuGet Packages". The NuGet Package Manager dialog opens up.  Search for HTML5 Boilerplate.  The options for MVC & Web Forms show up.  Click on Install corresponding to the "Add HTML5 Boilerplate to Web Forms" option. It installs the template in a few seconds.   Once installed, you will be able to see a lot of additional script files and also the all-important Html5Boilerplate.Master file.  This would be the replacement for the default Site.Master.  We need to change the content pages (Default.aspx and the other pages) to point to this Master Page.  The new master page declares itself as <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Html5Boilerplate.Master.cs" Inherits="WebApplication14.SiteMaster" %>, and each content page's MasterPageFile attribute should now reference Html5Boilerplate.Master instead of Site.Master. You can do a Find & Replace of Site.Master to Html5Boilerplate.Master for the whole solution so that it is changed in all the locations. With this, we have our Web Forms application ready with HTML5 capabilities.  Needless to say, we need to wire up HTML5 markup-level code, canvas, etc., further to use the actual HTML5 features, but even without that, the page is now HTML5'ed.   One of the advantages of HTML5 (here HTML5 refers collectively to CSS3, JavaScript enhancements, etc.)  is the ability to render pages better on mobile and handheld devices. So, now when we run the page from Visual Studio, the following is what we get.  Notice the site icon automatically added.  The page otherwise looks similar to what it was earlier. Now, when we also check this page on the Windows Phone Emulator, here below is what we get. As you can see, we definitely get a better experience now.  Of course, this is not the only HTML5 feature that we can use.  We need to wire up additional code for using Canvas, SVG and other HTML5 features.  But, definitely, this is a good starting point. You can also install the HTML5 Boilerplate template for your ASP.NET MVC 3 and ASP.NET MVC 2 projects from the NuGet packages and get them ready for HTML5. Cheers !!!

    Read the article

  • Part 9: EBS Customizations, how to track

    - by volker.eckardt(at)oracle.com
    In the previous blogs we were concentrating on the preparation tasks. We have defined standards, and we know about the tools and techniques we will start with. Additionally, we have defined the modification strategy and how to handle such topics best. Now we are ready to take in the requirements! Such requirements come over in spreadsheets, word files (like GAP documents), or in any other format. As we have to assign some attributes, we start numbering them all and assign a short name to each of these requirements (= the CEMLI reference). We may also already have a functional person assigned, we might involve someone from the tech team to estimate, and we like to assign a status such as 'planned', 'estimated', etc. All this data is usually kept in spreadsheets, but I would put it into a database (yes, I am from Oracle :). If you don't have any good-looking and centralized application already, please give Oracle APEX a try. It should be up and running in a day, and the imported sheets are then manageable concurrently!  For one of my clients I have created this CEMLI-DB; in between enriched with a lot of additional functionality, but initially it was just a simple centralized CEMLI tracking application. Why am I pointing out the centralized method of managing such data again? Well, your data quality will dramatically increase if you let your project members see (also review and update) "your" data.  APEX allows you to filter, sort, print, and also export. And if you can spend some time to define proper value lists, everyone will gain from them.  APEX allows you to work in 'agile' mode, meaning you can improve your application step by step. Let's say you'd like to reference a document, or even upload it: you can do that. Or you need to classify the CEMLIs by release: just add this release field; same for business area or CEMLI type. One CEMLI record may then look like this: Prepare one or two (online) reports, to be ready to present your "workload" to the project management. Use such extracts also when you work offline (to prioritize etc.). But as soon as you are connected again, feed the data back into the central application. Note: I have combined this application with an additional issue tracker.  Here the most important element is the CEMLI reference, which acts as the link to any other application (if you are not using APEX also as the issue tracker :).  Please spend a minute to define such a reference (see blog Part 8: How to name Customizations).   Summary: Building the bridge from gap analysis to development has to be done in a controlled way. Usually the information is provided in different formats, but it is suggested to collect all requirements centrally. Oracle APEX is a great solution to enter and maintain such information in a structured, but flexible way. APEX helped me a lot to work with distributed development teams during the complete development cycle.
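    If you want to try this yourself, the underlying structure can start very small. Below is a minimal sketch of such a tracking table; the table and column names are hypothetical, not from the original application, so adjust them to your own naming standards from Part 8. APEX can generate a report and a maintenance form on top of it in minutes:

    CREATE TABLE cemli_tracker (
      cemli_ref         VARCHAR2(30) PRIMARY KEY,  -- short name / CEMLI reference
      description       VARCHAR2(240) NOT NULL,    -- the requirement in one line
      cemli_type        VARCHAR2(30),              -- e.g. report, interface, extension
      release_tag       VARCHAR2(20),              -- release the CEMLI belongs to
      business_area     VARCHAR2(40),
      functional_owner  VARCHAR2(60),              -- assigned functional person
      tech_estimate_d   NUMBER,                    -- estimate in days, from the tech team
      status            VARCHAR2(20) DEFAULT 'planned'
                        CHECK (status IN ('planned','estimated','in progress','done')),
      doc_ref           VARCHAR2(200)              -- pointer to the GAP document or upload
    );

    Making the CEMLI reference the primary key is what lets the issue tracker (or any other application) join cleanly back to this table.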

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
    Three days ago I tweeted this: Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily — Jamie Thomson (@jamiet) March 24, 2012 Evidently I wasn't the only one that thought this was a good idea because as you can see from the screenshot that tweet has, so far, been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on, so I am pleased to announce that as of today you can do just that - [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means. *By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs I'm afraid - if you want to do that then you can get the bits and host it yourself. This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. If the community contributes more than we need then there are a number of additional things that could be done: Host additional databases (Northwind anyone??) Host in more datacentres (this first one is in Western Europe) Make a charitable donation That last one, a charitable donation, is something I would really like to do. The SQL Community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting please make a contribution. OK, with the prickly subject of begging for cash out of the way let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure:

    Server: mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User: sqlfamily
    Password: sqlf@m1ly

    That user sqlfamily has all the permissions required to enable you to query away to your heart's content. Here is the code that I used to set it up:

    CREATE USER sqlfamily FOR LOGIN sqlfamily;
    CREATE ROLE sqlfamilyrole;
    EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily';
    GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole;
    GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole;
    GRANT SHOWPLAN TO sqlfamilyrole;
    EXEC sp_addrolemember 'db_datareader','sqlfamilyrole';

    You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net. Lastly, just for a bit of fun I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card. 
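    Once connected, any read-only query will work; as a quick smoke test, something like the following (a hypothetical example query, not from the original post, assuming the standard AdventureWorks2012 schema) should return ten rows:

    SELECT TOP 10 p.BusinessEntityID, p.FirstName, p.LastName
    FROM Person.Person AS p          -- standard AdventureWorks2012 person table
    ORDER BY p.BusinessEntityID;

    As for that calling card: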
Simply execute the following SQL statement (changing the values of course):

    INSERT [dbo].[SqlFamily] ([Name],[Message],[TwitterHandle],[BlogURI])
    VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)');

    [Id] is an IDENTITY field and there is a default constraint on [DT] hence there is no need to supply a value for those. Note that you only have INSERT permissions, not UPDATE or DELETE so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :) Thank you for reading this far and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you. @jamiet AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!

    Read the article

  • Visual Studio Little Wonders: Quick Launch / Quick Access

    - by James Michael Hare
    Once again, in this series of posts I look at features of Visual Studio that may seem trivial, but can help improve your efficiency as a developer. The index of all my past little wonders posts can be found here. Well, my friends, this post will be a bit short because I’m in the middle of a bit of a move at the moment.  But, that said, I didn’t want to let the blog go completely silent this week, so I decided to add another Little Wonder to the list for the Visual Studio IDE. How often have you wanted to change an option or execute a command in Visual Studio, but couldn’t remember where the darn thing is in the menus, settings, etc.?  If so, Quick Launch in VS2012 (or Quick Access in VS2010 with the Productivity Power Tools extension) is just for you! Quick Launch / Quick Access – find a command or option quickly For those of you using Visual Studio 2012, Quick Launch is built right into the IDE at the top of the title bar, near the minimize, maximize, and close buttons: But do not despair if you are using Visual Studio 2010, you can get Quick Access from the Productivity Power Tools extension.  To do this, you can go to the extension manager: And then go to the gallery and search for Productivity Power Tools and install it.  If you don’t have VS2012 yet, then the Productivity Power Tools is the next best thing.  This extension updates VS2010 with features such as Quick Access, the Solution Navigator, a searchable Add Reference dialog, better tab wells, etc.  I highly recommend it! But back to the topic at hand!  In VS2012 Quick Launch is built into the IDE and can be accessed by clicking in the Quick Launch area of the title bar, or by pressing CTRL+Q.  If you have VS2010 with the PPT installed, though, it is called Quick Access and is accessible through View –> Quick Access: Regardless of which IDE you are using, the feature behaves mostly the same.  It allows you to search all of Visual Studio’s commands and options for a particular topic.  For example, let’s say you want to change the editor from keeping tabs to expanding tabs to spaces, but don’t remember where that option is buried.  You can bring up Quick Launch / Quick Access and type in “tabs”: It brings up a list of all options relating to tabs; you can then choose the appropriate one, click on it, and it will take you right there! A lot easier than diving through the options tree to find what you are looking for!  It also works on menu commands, for example if you can’t remember how to open the Output window: It shows you the menu items that will get you to the Output window, and (if applicable) the keyboard shortcuts.  Again, clicking on one of these will perform the action for you as well. There are also some tasks you can perform directly from Quick Launch / Quick Access.  For example, perhaps you are one of those people who like to have line numbers in your editor (I do), so let’s bring up Quick Launch / Quick Access and type “line numbers”: And let’s select Turn Line Numbers On, and now our editor looks like: And Voila!  We have line numbers in VS2010.  You can do this in VS2012 too, but it takes you to the option settings instead of directly turning them on and off.  There are bound to be differences between the way the two versions organize settings and commands, but you get the point. So, as you can see, the Quick Launch / Quick Access feature in Visual Studio makes it easy to jump right to the options, commands, or tasks you are interested in without all the digging. 
Summary An IDE as powerful as Visual Studio has so many options and commands that it can be confusing to remember how to find and invoke them.  Quick Launch (Quick Access in VS2010 with the Productivity Power Tools extension) is a quick and handy way to jump to any of these options, commands, or tasks quickly without having to remember in what menu or screen they are buried!

    Read the article

  • How do you educate your teammates without seeming condescending or superior?

    - by Dan Tao
    I work with three other guys; I'll call them Adam, Brian, and Chris. Adam and Brian are bright guys. Give them a problem; they will figure out a way to solve it. When it comes to OOP, though, they know very little about it and aren't particularly interested in learning. Pure procedural code is their MO. Chris, on the other hand, is an OOP guy all the way -- and a cocky, condescending one at that. He is constantly criticizing the work Adam and Brian do and talking to me as if I must share his disdain for the two of them. When I say that Adam and Brian aren't interested in learning about OOP, I suspect Chris is the primary reason. This hasn't bothered me too much for the most part, but there have been times when, looking at some code Adam or Brian wrote, it has pained me to think about how a problem could have been solved so simply using inheritance or some other OOP concept instead of the unmaintainable mess of 1,000 lines of code that ended up being written instead. And now that the company is starting a rather ambitious new project, with Adam assigned to the task of getting the core functionality in place, I fear the result. Really, I just want to help these guys out. But I know that if I come across as just another holier-than-thou developer like Chris, it's going to be massively counterproductive. I've considered: Team code reviews -- everybody reviews everybody's code. This way no one person is really in a position to look down on anyone else; besides, I know I could learn plenty from the other members on the team as well. But this would be time-consuming, and with such a small team, I have trouble picturing it gaining much traction as a team practice. Periodic e-mails to the team -- this would entail me sending out an e-mail every now and then discussing some concept that, based on my observation, at least one team member would benefit from learning about. The downside to this approach is I do think it could easily make me come across as a self-appointed expert. Keeping a blog -- I already do this, actually; but so far my blog has been more about esoteric little programming tidbits than straightforward practical advice. And anyway, I suspect it would get old pretty fast if I were constantly telling my coworkers, "Hey guys, remember to check out my new blog post!" This question doesn't need to be specifically about OOP or any particular programming paradigm or technology. I just want to know: how have you found success in teaching new concepts to your coworkers without seeming like a condescending know-it-all? It's pretty clear to me there isn't going to be a sure-fire answer, but any helpful advice (including methods that have worked as well as those that have proved ineffective or even backfired) would be greatly appreciated. UPDATE: I am not the Team Lead on this team. Chris is. UPDATE 2: Made community wiki to accord with the general sentiment of the community (fancy that).

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form in a webpage and something happens before you can finish it? If you like the idea of recovering that lost data then you will want to have a look at the Lazarus: Form Recovery extension for Firefox. Lazarus: Form Recovery in Action For our first example we chose the comment text box area for one of the articles here at the website. As you can see we were not finished typing in the whole comment yet… Notice the “Lazarus Icon” in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again all of our text was gone. Right clicking within the text area showed two options available…”Recover Text & Recover Form”. Notice that our lost text was listed as a “sub menu”…this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again. In our second example we chose the sign-up form page for the website. As before we were not finished filling in the form… Getting the webpage opened back up showed the same problem as before…all the entered text was lost. This time we right clicked in the browser window area and there was that wonderful “Recover Form Command” waiting to be used. One click and… All of our lost form data was back and we were able to finish filling in the form. For those who may be interested you can disable Lazarus: Form Recovery on individual websites using the “Context Menu” for the “Status Bar Icon” Options There are three sections in the options and you should take a quick look through them to make any desired modifications in how Lazarus: Form Recovery functions. The first “Options Area” focuses on display/access for the extension. The second “Options Area” allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in “Private Browsing Mode”. The third “Options Area” focuses on the Lazarus database itself. Conclusion If you have ever lost text area or form data before then you know how much time could be lost in starting over. Lazarus: Form Recovery helps provide a nice backup solution to get you up and running once again with a minimum of effort. Links Download the Lazarus: Form Recovery extension (Mozilla Add-ons) Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well, this week's T-SQL Tuesday #005 topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is to add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me. Rule 1 – Never trust the automatically generated hidden datasets Or ANY automatically generated MDX queries, for that matter... I've previously blogged about this here.   If you examine the MDX generated in the hidden dataset you will see that it generates the MDX in the context of the originating query by building a subcube; this means it may NOT be appropriate to use this in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for the last month and a dataset for the last 6 months. Both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added" The only way I've found to reliably avoid this is to obey rule 2. Rule 2 – Follow this sequence when it comes to working with Parameters and DataSets: 1.    Create lookup and default datasets in advance 2.    Create parameters (set the datasets for available and default values) 3.    Go into the query and tick the parameter check box 4.    On the dataset properties screen, select the parameter defined earlier as the parameter value defined earlier. Rule 3 – Don't tear your hair out when you have just renamed objects and your report doesn't build Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From this you can search for and find references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy of course ;-) So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.   @Blakmk
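    To make the 'ALL' level tip concrete, here is what a hand-written parameter dataset can look like in the usual SSRS style (a hypothetical sketch against the Adventure Works sample cube, not taken from the post; swap in your own dimension and cube names). Referencing the level rather than the hierarchy is what drops the All member:

    WITH
    MEMBER [Measures].[ParameterCaption] AS [Date].[Calendar Year].CURRENTMEMBER.MEMBER_CAPTION
    MEMBER [Measures].[ParameterValue]   AS [Date].[Calendar Year].CURRENTMEMBER.UNIQUENAME
    SELECT
      { [Measures].[ParameterCaption], [Measures].[ParameterValue] } ON COLUMNS,
      // [Date].[Calendar Year].ALLMEMBERS would include the All member;
      // naming the level explicitly ([Date].[Calendar Year].[Calendar Year]) excludes it
      [Date].[Calendar Year].[Calendar Year].ALLMEMBERS ON ROWS
    FROM [Adventure Works]

    Because this dataset is written by hand rather than generated, it also carries no hidden subcube context, which is exactly what Rule 1 warns about.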

    Read the article

  • Organizational characteristics that impact the selection of Development Methodology concepts applied to a project

    Based on my experience, no one really follows a specific methodology exactly as it is formally designed. In fact, the key concepts of a few methodologies are usually combined to form a hybrid methodology for each project, based on the current organizational makeup and the project needs/requirements to be accomplished. Organizational characteristics that impact the selection of methodology concepts applied to a project. Prior subject knowledge pertaining to a project can be critical when deciding on what methodology or combination of methodologies to apply to a project. For example, if a project is very straightforward, and the development staff has experience in developing applications that are similar, then the waterfall method could possibly be the best choice because little to no research is needed in order to complete the project tasks and there is very little need for changes to occur.  On the other hand, if the development staff has limited subject knowledge, or the requirements/specification of the project could possibly change as the project progresses, then the use of spiral, iterative, incremental, agile, or any combination would be preferred. The previous methodologies used by an organization typically do not change much from project to project unless the needs of a project dictate differently. For example, if the waterfall method is the preferred development methodology, then most projects will be developed by the waterfall method. The time allotted to a project each day can also impact the selection of a development methodology. In one example, if the staff can only devote a few hours a day to a project, then the incremental methodology might be ideal because modules can be added to the final project as they are developed. On the other hand, if daily time allocation is not an issue, then a multitude of methodologies could work well for a project. Project characteristics that impact the selection of methodology concepts applied to a project. The type of project being developed can often dictate the type of methodology used for the project. Based on my experience, projects that have a lot of user interaction tend to follow a more iterative, incremental, or agile approach, typically using a prototype that develops into a final project. These methodologies rely on back-and-forth communication between users, clients, and developers to allow for requirements to change and functionality to be enhanced. Conversely, applications with limited interaction or automated services can still sometimes get away with using the waterfall or transactional approach. The timeline of a project can also force an organization to prefer a particular methodology over the rest. For instance, if the project must be completed within 24 hours, then there is very little time for discussions back and forth between clients, users and the development team. In this scenario, the waterfall method would be perfect because the only interaction with the client occurs prior to a development project to outline the system requirements, and the development team can quickly move through the software development stages in order to complete the project within the deadline. If the team had more time, then the other methodologies could also be considered, because there is more time for clients and users to review the project and make changes as they see fit, and/or more time to review the project in order to enhance the business performance and functionality. 
Sometimes client or user involvement can dictate the selection of methodologies applied to a project. One example of this: if a client is highly motivated to get a project completed and desires to play an active part in the development process, then the agile development approach would work perfectly, because it allows for frequent interaction between clients, users and the development team. The inverse of this situation is a client that just wants to provide the project requirements and only wants to get involved when the project is to be delivered. In this case the waterfall method would work well because there is no room for changes and no back and forth between the users, clients or the development team.

    Read the article

< Previous Page | 632 633 634 635 636 637 638 639 640 641 642 643  | Next Page >