Search Results

Search found 9371 results on 375 pages for 'existing'.

  • Microsoft Lowers Cloud Barrier To Entry

    - by Herve Roggero
    Once in a while, the technology stack changes enough to create a disturbance in the IT industry. Microsoft did just that today and has officially closed the gap with its #1 competitor: Amazon. What is remarkable is that Microsoft is no longer just an alternative to Amazon; it is becoming a clear leader in that space. Some of the new features include official support for durable Virtual Machines with high availability (cross-geographic replication), free Web Sites to try Azure, MySQL databases at no charge, a new distributed low-latency cache feature, Linux support, support for existing VPN hardware for seamless on-premises integration, a new partner ecosystem and much, much more. Amazon had an edge over Windows Azure in the IaaS (Infrastructure as a Service) space, until now. With the latest release from Microsoft Azure, the gap has been filled. In fact, it seems Amazon may now have a gap to fill… This is great news for everyone; cloud offerings are becoming more standardized among the more mature cloud providers, and the management stack and quality of service of each cloud provider are increasingly becoming the differentiators. With today’s announcements, it is clear that cloud providers are pushing hard to increase their service footprint and to lower typical barriers to entry such as support for open-source operating systems, free trial offers, higher availability, faster deployment times and simpler enterprise integration.

    Read the article

  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight Team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available through Connect to SQL CTP customers since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significant new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team

    Read the article

  • Quantifying the Value Derived from Your PeopleSoft Implementation

    - by Mark Rosenberg
    As product strategists, we often receive the question, "What's the value of implementing your PeopleSoft software?" Prospective customers and existing customers alike are compelled to justify the cost of new tools, business process changes, and the business impact associated with adopting the new tools. In response to this question, we have been working with many of our customers and implementation partners during the past year to obtain metrics that demonstrate the value obtained from an investment in PeopleSoft applications. The great news is that as a result of our quest to identify value achieved, many of our customers began to monitor their businesses differently and more aggressively than in the past, and a number of them informed us that they have some great achievements to share. For this month, I'll start by pointing out that we have collaborated with one of our implementation partners, Huron Consulting Group, Inc., to articulate the levers for extracting value from implementing the PeopleSoft Grants solution. Typically, education and research institutions, healthcare organizations, and non-profit organizations are the types of enterprises that seek to facilitate and automate research administration business processes with the PeopleSoft Grants solution. If you are interested in understanding the ways in which you can look for value from an implementation, please consider registering for the webcast scheduled for Friday, December 14th at 1pm Central Time in which you'll get to see and hear from our team, Huron Consulting, and one of our leading customers. In the months ahead, we'll plan to post more information about the value customers have measured and reported to us from their implementations and upgrades. If you have a great story about return on investment and want to share it, please contact either [email protected]  or [email protected]. We'd love to hear from you.

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture, so I’m interested in the consensus. The answer seems fairly obvious to me, but something about it is nagging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one? The detailed version is: if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there is a need to update the application data within this database periodically, should it be split into two databases so that one can simply download the updated application-data database file as an update and replace the old one? Or should they remain as one database, with the application data updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason it just doesn’t feel right, and I can't pick out quite why.
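    As a rough illustration of the single-database option, here is a minimal sketch (Python with SQLite; the table name app_products and its columns are invented for the example) of an update script that refreshes only the application tables while never touching the user tables:

    ```python
    import sqlite3

    # Hypothetical reference rows shipped with an application update.
    NEW_APP_DATA = [
        (1, "Widget", 9.99),
        (2, "Gadget", 14.99),
    ]

    def refresh_application_data(db_path):
        """Upsert the application (reference) table in place; user tables are never touched."""
        conn = sqlite3.connect(db_path)
        try:
            with conn:  # one transaction: either the whole refresh applies or none of it
                conn.execute(
                    "CREATE TABLE IF NOT EXISTS app_products "
                    "(id INTEGER PRIMARY KEY, name TEXT, price REAL)"
                )
                # Upsert keeps the ids stable, so user rows that reference them stay valid.
                # (ON CONFLICT ... DO UPDATE needs SQLite 3.24+, bundled with recent Python.)
                conn.executemany(
                    "INSERT INTO app_products (id, name, price) VALUES (?, ?, ?) "
                    "ON CONFLICT(id) DO UPDATE SET name = excluded.name, price = excluded.price",
                    NEW_APP_DATA,
                )
        finally:
            conn.close()

    if __name__ == "__main__":
        refresh_application_data("catalog.db")
    ```

    The user tables never appear in the script, which is what makes the in-place refresh safe; the equivalent in the two-database design would be swapping out the whole application-data file.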

    Read the article

  • Can the "Documents" standard folder be rescued and how?

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?
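    For what it's worth, the closest existing mechanism is the blunter per-user redirection of the whole folder: Windows resolves the Documents location through the User Shell Folders registry key (value name "Personal"), so it can be pointed somewhere else and the cluttered folder abandoned. A rough sketch for illustration only, not a recommendation (the target path is hypothetical; the change takes effect for newly started programs / after re-logon):

    ```python
    import winreg  # Windows only

    # Per-user key through which the Documents known folder is resolved.
    KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

    def redirect_documents(new_path):
        """Point the Documents folder at new_path ("Personal" is the Documents value name)."""
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Personal", 0, winreg.REG_EXPAND_SZ, new_path)

    if __name__ == "__main__":
        # Hypothetical clean target; programs will follow it too, so this is a workaround, not a cure.
        redirect_documents(r"%USERPROFILE%\RealDocuments")
    ```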

    Read the article

  • Dualboot (Win 8 / Ubuntu 13) is stuck at 'switching to clocksource'

    - by Daniel Puscht
    For days I have been crawling the web for solutions to my problem, but couldn't find any. Here it is: I got a new laptop (ASUS VivoBook S200E) with Win 8 OEM preinstalled. I wanted to create a dual-boot system with Ubuntu 13 next to it. I read about UEFI and learned that I have to turn off Secure Boot and use the existing EFI partition for the Ubuntu bootloader. So I did. I also ran Boot-Repair to reinstall GRUB. The result is that when I start the computer I get into the boot menu. So far, so good. When I pick Windows everything is fine. But when I choose Ubuntu (recovery) the system starts, but gets stuck at the line '[1.806366] Switching to clocksource tsc'. I already tried other versions of Ubuntu (12.04.2, 12.10). I played with Boot-Repair (using the recommended fix, setting everything manually). But nothing works; it's always the same issue. I read that it could be a problem concerning graphics drivers, but this I can hardly believe. If this is any help, boot-repair gave me this link to post in forums: http://paste.ubuntu.com/5810391/ Thanks in advance for any help.
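    In case it helps anyone hitting the same hang: the clocksource line is often just the last message printed before a graphics/KMS problem, so a common first experiment is booting with nomodeset (for a one-off test, press 'e' on the GRUB entry and append nomodeset to the line starting with linux). A sketch of making it persistent if the test helps (usual file and command; back up first, no guarantee it fixes this particular machine):

    ```
    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

    # then regenerate the GRUB configuration and reboot:
    #   sudo update-grub
    ```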

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine; I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes taking different paths depending on the result value, etc., etc. Suddenly, everything is very messy: stopping execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain... Development becomes less quick, less agile, and catching those bugs that happen once in a thousand runs becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very specific scenarios, but a bad experience using them broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.
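    The chaining pain described here is usually eased by composing the calls sequentially with async/await rather than nesting continuations. A minimal sketch in Python's asyncio (the service calls and order fields are stand-ins; the original library is .NET/TPL, where async/await plays the same role):

    ```python
    import asyncio

    # Stand-ins for the repository's asynchronous web-service calls.
    async def fetch_order(order_id):
        await asyncio.sleep(0.1)  # simulated network latency
        return {"id": order_id, "status": "new"}

    async def reserve_stock(order):
        await asyncio.sleep(0.1)
        return {**order, "stock": "reserved"}

    async def schedule_backorder(order):
        await asyncio.sleep(0.1)
        return {**order, "stock": "backordered"}

    async def process_order(order_id):
        """The chain reads top to bottom, so a breakpoint or log line at any step
        tells you exactly where in the process timeline you are."""
        order = await fetch_order(order_id)
        if order["status"] == "new":
            order = await reserve_stock(order)
        else:
            order = await schedule_backorder(order)
        return order

    if __name__ == "__main__":
        print(asyncio.run(process_order(42)))
    ```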

    Read the article

  • Trade In, Trade Up Promotion: SPARC Consolidation Now Through May 31st

    - by swalker
    Dear Partner, Installed Base Business (IBB) technology refresh is one of the most important activities for Oracle, for you and for your customers. It allows your existing customers to benefit from the most up-to-date, best-of-breed Oracle products. And it’s an exciting time to perform a technology refresh: a new SPARC promotion is available now, closing 31st May 2012. Customers trading in older SPARC systems and upgrading to a new SPARC SuperCluster T4-4 or SPARC Enterprise M8000/M9000 can get $4,000 per CPU. The discount is pre-approved and upfront (maximum discounts apply). The major highlights are as follows:
      - Targeted systems: upgrade to SPARC M8000, M9000, SuperCluster
      - Qualified installed base upgrade from: all older generations of SPARC systems
      - Promotional offer: trade-in value of $4K per CPU; pre-approved maximum discount (including trade-in) not to exceed 60% on M8/9000 systems and 25% on SuperCluster; no-cost dock-to-dock shipping, and environmentally safe disposal of the returned hardware through Oracle best-of-class recycling processes
    Recommendations: we recommend that you take the following actions:
      - As usual, please register your opportunities in OMM.
      - When you do so, please make sure you place the following campaign name in the “Marketing Initiative” field of OMM: Campaign Name : EMEA_Tech Refresh-IBB Campaign_12H1_Follow Up_O
    For all the details, please view the rules and FAQs. For more information, please visit the Promo Partner Site here. For more information on IBB and the Oracle Upgrade Advantage Program (UAP): http://www.oracle.com/us/products/servers-storage/upgrade-advantage-program/index.html http://www.oracle.com/partners/secure/sales/oracle-ibb-program-for-partners-184291.html Contacts: for questions, please contact your favorite Oracle Partner Account Manager.

    Read the article

  • Beta Period Closed for "Java EE 6 JavaServer Faces Developer Certified Expert Exam" Certification Exam (1Z1-896)

    - by Brandye Barrington
    The beta period is closed for the Java EE 6 JavaServer Faces Developer Certified Expert Exam (Exam 1Z1-896), and registration is now open for the production version of the exam. Passing this exam leads to the Oracle Certified Expert, Java EE 6 JavaServer Faces Developer certification. Earning a JavaServer Faces certification can help you deliver lower cost and faster time to market by allowing an experienced Java developer to take a web page from conception to delivery, reducing the need for repeated collaboration with web designers and developers. With the range of products built on JSF, developing expertise in this technology through certification can open the door to a variety of opportunities and give you an edge over your peers. This certification is also a valuable addition to your existing Java EE 5 and EE 6 certifications, increasing your marketable skills and solidifying your credibility. While training is not required for certification, the Java EE 6: Develop Web Applications with JSF course from Oracle University can expedite your path to certification. Visit pearsonvue.com/oracle and register for exam 1Z0-896. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing on the Oracle Certification website. QUICK LINKS: Certification Track: Oracle Certified Expert, Java EE 6 JavaServer Faces Developer Certification Exam: Java EE 6 JavaServer Faces Developer Certified Expert Exam (1Z1-896) Recommended Training: Java EE 6: Develop Web Applications with JSF Certification Website: About Beta Exams Register Now: Pearson VUE

    Read the article

  • How much effort is involved in moving a WordPress site to a private server? [on hold]

    - by Alan
    I work in tech, but am on the business side. I have a WordPress site that I would like to move to a personal server and associate with a new domain name. I already have a server (actually, a friend is letting me use his) and the domain name. A friend-of-a-friend, who claims to be an IT pro, has agreed to help, but now is asking for what feels like a lot of money for what he says is a pretty time-intensive job. This doesn't sound right to me, so I thought I would ask here: Would it take months or even days to move the content, and why would it have to be moved in stages? The blog currently uses a basic template and has about 1000 posts. How much effort is really involved in moving a WordPress site from one server to another? Can anyone explain the process? Would it just make more sense to point the domain name at the existing WordPress blog, and pay the nominal yearly fee? I appreciate any answers you can provide.
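    For reference, the mechanical part of such a move is usually modest for a 1000-post blog: dump the database, copy the files, repoint the configuration, then fix the site URL. A rough sketch with placeholder database names, users, and paths (assuming the default wp_ table prefix; your values will differ):

    ```
    # On the old host: export the database and the WordPress files
    mysqldump -u OLD_DB_USER -p OLD_DB_NAME > blog.sql
    tar czf wordpress-files.tar.gz /path/to/wordpress

    # On the new host: restore both
    mysql -u NEW_DB_USER -p NEW_DB_NAME < blog.sql
    tar xzf wordpress-files.tar.gz -C /var/www/

    # Point wp-config.php at the new database (DB_NAME / DB_USER / DB_PASSWORD / DB_HOST),
    # then update the site address stored in the options table:
    mysql -u NEW_DB_USER -p NEW_DB_NAME -e \
      "UPDATE wp_options SET option_value='https://new-domain.example' WHERE option_name IN ('siteurl','home');"
    ```

    Old URLs baked into post content may additionally need a search-and-replace, which is typically the fiddliest step, but nothing in the list above is inherently a months-long job for a site of this size.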

    Read the article

  • Engineered Systems and PCI

    - by Joel Weise
    Oracle has a number of different engineered systems.  These are designed to be highly integrated, optimized and secure systems.  The Exadata database engineered system and the Exalogic application engineered system are two good examples.  Often I am asked how these comply with different standards and regulations.  Exalogic is the Oracle engineered system that supports applications and the focus of today's blog.  First, we must recognize that as a collection of hardware and software, we cannot simply state that Exalogic is "compliant" with PCI DSS.  This is because Exalogic must be implemented within the context of one's existing IT infrastructure, the security features of that infrastructure, the governance framework that exists, security policies, operational procedures, and other factors.  What we can say, though, is that Exalogic has been designed with various security capabilities that can be utilized to support compliance with PCI DSS as well as other standards and regulations (e.g., NIST and HIPAA).  Given that, Exalogic can be an excellent platform for running PCI-related payment applications.  Coalfire Systems, a leading QSA in the US, has evaluated Exalogic against PCI DSS and supports this position.  Their evaluation can be found here: Exalogic and PCI Compliance. I hope you find it useful.

    Read the article

  • Fresh install of 64 bit 12.04 over 32 bit 11.10 alongside Windows 7

    - by Pareen
    I currently have Ubuntu 11.10 32-bit and Windows 7 dual-booting in separate partitions. I am trying to do a fresh install of Ubuntu 12.04 64-bit (I mistakenly installed the 32-bit 11.10 a little while ago; I need a 64-bit version to support an AOSP build) OVER my existing 11.10 partition. I have referenced How to Install fresh 12.04 install to a PC with dual booting Windows 7 & Ubuntu11.10?, as well as other posts on using the Live CD to do a fresh install. However, the problem I am experiencing is that when I bring up the install screen, it says "This computer has multiple operating systems on it. What would you like to do?" with three options:
      - Install Ubuntu 12.04 alongside them
      - Replace all with Ubuntu 12.04 (Warning: this will delete files across ALL operating systems)
      - Something else (you can create or resize partitions yourself)
    This is different from what is in other posts, as mine states that there are "multiple operating systems" and doesn't individually allow me to replace the Ubuntu 11.10. I don't want to replace ALL operating systems: I need to preserve Windows 7 and am only trying to replace the old Ubuntu 11.10 partition with the new 12.04 64-bit. I did have Ubuntu installed via Wubi (I believe it was 10.04) prior to putting 11.10 in a separate partition, but I have removed it via Add/Remove Programs in Windows. I was wondering how to go about doing this... Should I use the "Something else" option to bring up the partition manager, and just assign my existing 11.10 partition the root mount point + swap space? Will this do the same thing as overwriting with a fresh 12.04 install? I appreciate all your help.

    Read the article

  • Should business services cross bounded contexts?

    - by Paul T Davies
    Firstly, I am following the convention that a bounded context is synonymous with a department, or possibly that one department has one or more bounded contexts. We have a client consultancy department that has a Documentation Service. Documents are stored in the Document Store Service (which is where all documents in the company are stored - it is a utility service), and the Documentation Service stores information about each document (a business service). As it was designed for the client consultancy, that information is relevant to them. Now health and safety need somewhere to store information about a document. This is different information from client consultancy's, but I have been instructed to extend the existing service to account for this extra information. I feel this service is now crossing a bounded context. My worry is that all departments will eventually store their information in here and the service will become bloated, trying to be all things to all departments. Each document record will only store a subset of the information because it will only belong to one department. It will get worse when different departments want to store the same information but refer to it in different ways, or when two departments want to store different information that they refer to in the same way. In my understanding, this is exactly the reason for bounded contexts. I feel each department should have its own business service for information about a document, but use the same utility service to actually store the document. What would be the correct approach?
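    As a sketch of the separation being argued for (Python, with invented class and field names): each department keeps its own record type and business service for "information about a document", while both delegate the document bytes to the shared store by id, so each context can evolve independently.

    ```python
    from dataclasses import dataclass

    # Shared utility service: stores the document itself, knows nothing about departments.
    class DocumentStore:
        def __init__(self):
            self._blobs = {}

        def put(self, doc_id, content: bytes):
            self._blobs[doc_id] = content

        def get(self, doc_id) -> bytes:
            return self._blobs[doc_id]

    # Client-consultancy bounded context: its own view of document information.
    @dataclass
    class ConsultancyDocumentInfo:
        doc_id: str
        client: str
        engagement_code: str

    # Health-and-safety bounded context: a different view, free to change on its own.
    @dataclass
    class SafetyDocumentInfo:
        doc_id: str
        site: str
        review_due: str  # e.g. "2013-06-30"

    # Each department's business service knows only its own record type plus the shared store.
    class ConsultancyDocumentationService:
        def __init__(self, store: DocumentStore):
            self.store = store
            self.records = {}

        def register(self, info: ConsultancyDocumentInfo, content: bytes):
            self.store.put(info.doc_id, content)
            self.records[info.doc_id] = info

    if __name__ == "__main__":
        store = DocumentStore()
        consultancy = ConsultancyDocumentationService(store)
        consultancy.register(ConsultancyDocumentInfo("D-001", "Acme Ltd", "ENG-42"), b"...")
        print(consultancy.records["D-001"])
    ```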

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management (CM) tool. I thought using SVN for development at the same time was OK, since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person who develops high-level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further thinks that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be used purely for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article

  • SOA: Simplifying Cloud, Mobile, and On-premise Integration–Webcast October 24th 2013

    - by JuergenKress
    Proliferation of mobile devices, data explosion, and cloud enablement has caused a dramatic shift in IT. Organizations need to rethink their application infrastructures to accommodate increased processing speeds and heightened security and availability concerns for their applications, all while meeting a lowered total cost of ownership. Traditional infrastructures may not be sufficient to accommodate the diversity and complexity of integrations in this new era. Many of today’s IT organizations rely on a Service Oriented Architecture (SOA) backbone to keep their businesses running. SOA adoption and acceptance across industries have led to platform maturity at the application layer level. However, we are at the start of an era where there is a new modus operandi for organizations to thrive and deliver continuously on competitive differentiation. This change is a result of market globalization, explosion in the number of mobile devices, unparalleled growth in voluminous data and innovation that crosses organizational boundaries. Social, mobile, and cloud are terms that are revolutionizing the way organizations operate. Oracle SOA Suite is a hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). Oracle SOA transforms complex application integration into agile and reusable service-based connectivity by mediating, routing, and managing interactions between services and applications in the enterprise and in the cloud. Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets. Join us on this webcast to find out how you can optimize the use of Oracle SOA Suite, simplify integration, and see what the next generation of SOA has to offer you. Agenda:
      - What's new in Oracle SOA
      - Simplifying integration
      - Application Integration and SOA
      - Cloud integration with SOA
      - Mobile Integration leveraging Oracle SOA Suite
      - Oracle Delivers on Next Generation SOA
      - Customer Examples
      - Summary and Q&A
    Webcast: Thursday October 24th, 2013, 10am CET (8am UTC / 11am EEST). Details at the Registration Page. SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: cloud integration,mobile integration,training,webcast middeware,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Impact of Service Oriented Architecture (SOA) on Business and IT Operations

    The impact of Service Oriented Architecture (SOA) on business and IT operations varies from company to company. I think more and more companies are starting to view SOA as just another technology that they can incorporate into an existing or new system. One of the driving factors in using SOA is the reduction in maintenance costs and the decrease in the time needed to bring products to market. The reduced costs and turnaround time can be converted directly into increased profitability, because less expenditure is needed to maintain or create new systems. My personal perspective on SOA is that it is great for what it is actually intended to do. SOA allows systems to be distributed across networks, or even the world, while ensuring enterprise processing consistency and data integrity and preventing code duplication. That being said, a lot of preparation and work go into properly designing and implementing an SOA, especially if an enterprise wants to take full advantage of its benefits. Even though SOA has recently gotten a lot of hype about its benefits, it is not a perfect fit for all situations. At the end of the day SOA is just another tool in my tool belt that I can pull from to create solutions that meet the business’s needs. Based on current industry trends SOA appears to be a very solid technology to use moving forward, especially as more and more companies shift towards cloud-based computing. It is important to remember that SOA is one of many technologies that can be used in creating business solutions, and I think more time will be spent in the future evaluating whether SOA is the right technology for a solution once the initial hype around SOA has calmed down.

    Read the article

  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST and then processed the results that were returned so that they could be inserted into a database. My system had eight core concerns that it needed to fulfill:
      - Database Access
      - Data Entities
      - Worker
      - Result Processing
      - Process Flow Manager
      - Email/Notification
      - Error Handling
      - Logging
    Of these eight, five were actually cross-cutting concerns:
      - Database Access
      - Data Entities
      - Email/Notification
      - Error Handling
      - Logging
    These five cross-cutting concerns were identified after I created an aspect-oriented model to help identify the system components that could be factored out into separate components.  These separated components would then be included in the system so that they could be used by various other components.  These five components allow all of the other components to access the database, store data, send notifications, handle errors, and log all system events.  Thus, these components are used to share unique aspects of the system via their implementation. The use of aspect-oriented architecture greatly helped me define what components I needed to create and what each of those components could do.  It also showed how all of the other aspects depended on each other so that each component did not have to re-implement code that was already created in the existing system.
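    As a small illustration of factoring a cross-cutting concern out of the core components, here is a sketch (Python, with invented names; the original system was not Python) of logging and error handling packaged once as a decorator and reused by any worker or result processor:

    ```python
    import functools
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("system")

    def logged_and_guarded(func):
        """Cross-cutting concern: every call is logged and every failure handled in one place."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            log.info("starting %s", func.__name__)
            try:
                result = func(*args, **kwargs)
                log.info("finished %s", func.__name__)
                return result
            except Exception:
                log.exception("error in %s", func.__name__)  # error-handling/notification hook
                raise
        return wrapper

    # Core concerns stay free of logging and error-handling noise.
    @logged_and_guarded
    def process_result(payload):
        return {"rows": len(payload)}

    if __name__ == "__main__":
        process_result([1, 2, 3])
    ```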

    Read the article

  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and decided to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project: the first sprints would target building the minimal possible functionality needed (across all functional areas), and then the level of functionality would be iteratively deepened according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from a MSDN forum since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010. Question: I opened VS2008 and connected it to the upgraded 2010 TFS server.  Upon clicking any of our Team Projects in source control explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME." Answer: The same local paths on your machine are mapped to 2 different workspaces, one on the preupgrade server and one on the postupgrade server.  It's not safe to have multiple workspaces on different servers mapped to the same local paths b/c you could pend some changes while connected to one server, and the other server would have no idea what you did.  You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don’t have to delete the entire cache, you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server.  You will also have to do back up and restore the workspace before switching servers or your local files could be inconsistent. From the “Microsoft Visual Studio Team Foundation Server 2010 Beta 1” forum (not available anymore?) Technorati Tags: TFS 2010,TFS Workspaces,Team System,Team Foundation Server 2010

    Read the article

  • Do you sign contracts digitally or still on paper? And what do clients think?

    - by user1162541
    We are all getting used to checking a box and putting our name in a text field to create a contract with an airline, a hosting company, or a software download. However, for some reason I am still asking clients to sign our contracts for website development on paper and send me a scan. Few complain about this procedure, but I am personally thinking: what am I doing, doing this the old-fashioned way?! Signing contracts digitally would be faster, more convenient for clients and for me, and easier to store. So to me it appears to be time to start creating a contract agreement online that clients can read, then print their name, and mark a box "I AGREE WITH THIS CONTRACT AND BY PRINTING MY NAME I AGREE TO SIGNING THIS", or something like that. I would record their IP, browser data, and time of signing. If I really want to ensure their identity, I could link this to OpenID and require them to log in with their e-mail so that I can ensure that they are logged in on an existing e-mail account. Sounds OK to me. My question is: is this becoming standard practice in professional IT services? Are you (as a professional) doing this? If you are, how do clients react? Any drawbacks to doing this? EDIT: This question is not about the legal aspects. It is about common practices among programmers and web-development companies, and what clients think of this.
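    The "record their IP, browser data, and time of signing" part is straightforward to capture alongside a hash of the exact contract text shown; a minimal sketch (Python, with invented field names; what such a record is worth is a separate question):

    ```python
    import hashlib
    import json
    from datetime import datetime, timezone

    def record_acceptance(contract_text, typed_name, ip_address, user_agent, email=None):
        """Build an acceptance record tying a typed name to the exact contract text displayed."""
        return {
            "contract_sha256": hashlib.sha256(contract_text.encode("utf-8")).hexdigest(),
            "typed_name": typed_name,
            "ip_address": ip_address,
            "user_agent": user_agent,
            "email": email,  # e.g. the OpenID / e-mail identity, if required
            "accepted_at": datetime.now(timezone.utc).isoformat(),
        }

    if __name__ == "__main__":
        record = record_acceptance(
            contract_text="Website development agreement v3 ...",
            typed_name="Jane Client",
            ip_address="203.0.113.7",
            user_agent="Mozilla/5.0 (...)",
        )
        print(json.dumps(record, indent=2))
    ```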

    Read the article

  • Workflow: Deploy Operating Systems

    - by Owen Allen
    The Deploy Operating Systems workflow is a workflow document that we added recently. It shows you how to get operating systems up and running in your environment. It's mostly linear, but it's a bit more complicated than some of the others. It's built around a pair of images. In both images, the left side shows the prerequisites for the whole process. Before you can deploy operating systems, you have to have Ops Center fully installed, with libraries set up and hardware already discovered. Once you've done that preparation, the first image walks you through all of the OS deployment steps. First you discover existing operating systems, then you provision Oracle Solaris 10 or Oracle Solaris 11. If you're not planning on using virtualization, then your deployment is done, and you're directed to the operate workflows. If you are interested in virtualization, though, you go on to the second image: The second image walks you through deploying virtualization, sending you to the Deploying Oracle Solaris 10 Zones, Deploying Oracle Solaris 11 Zones, or Deploying Oracle VM Server for SPARC workflows, depending on what kind of virtualization you're planning on using. Once you've done that, you're ready to go on to the operation workflows.

    Read the article

  • I don't understand how TDD helps me get a good design if I need a design to start testing it

    - by Michael Stum
    I'm trying to wrap my head around TDD, specifically the development part. I've looked at some books, but the ones I found mainly tackle the testing part - the history of NUnit, why testing is good, Red/Green/Refactor, and how to create a String Calculator. Good stuff, but that's "just" unit testing, not TDD. Specifically, I don't understand how TDD helps me get a good design if I need a design to start testing it. To illustrate, imagine these 3 requirements:
      - A catalog needs to have a list of products
      - The catalog should remember which products a user viewed
      - Users should be able to search for a product
    At this point, many books pull a magic rabbit out of a hat and just dive into "testing the ProductService", but they don't explain how they came to the conclusion that there is a ProductService in the first place. That is the "development" part of TDD that I'm trying to understand. There needs to be an existing design, but anything beyond entity services (that is: there is a Product, so there should be a ProductService) is nowhere to be found (e.g., the second requirement requires me to have some concept of a User, but where would I put the "remember viewed products" functionality? And is search a feature of the ProductService or a separate SearchService? How would I know which I should choose?) According to SOLID, I would need a UserService, but if I design a system without TDD, I might end up with a whole bunch of single-method services. Isn't TDD intended to make me discover my design in the first place? I'm a .NET developer, but Java resources would also work. I feel that there doesn't seem to be a real sample application or book that deals with a real line-of-business application. Can someone provide a clear example that illustrates the process of creating a design using TDD?
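    For the first two requirements, the TDD loop really does start from the test, with the design element (here a hypothetical Catalog class, not anything the books prescribe) appearing only because a failing test forces it into existence; a minimal sketch in Python's unittest:

    ```python
    import unittest

    # Written test-first: the Catalog class below only exists because these tests demanded it.
    class CatalogTests(unittest.TestCase):
        def test_catalog_lists_its_products(self):
            catalog = Catalog(products=["keyboard", "mouse"])
            self.assertEqual(catalog.list_products(), ["keyboard", "mouse"])

        def test_catalog_remembers_what_a_user_viewed(self):
            catalog = Catalog(products=["keyboard"])
            catalog.view("alice", "keyboard")
            self.assertEqual(catalog.viewed_by("alice"), ["keyboard"])

    # Simplest implementation that makes the tests pass; extracting a ViewingHistory or a
    # separate SearchService happens later, only once a new test makes that need concrete.
    class Catalog:
        def __init__(self, products):
            self._products = list(products)
            self._viewed = {}

        def list_products(self):
            return list(self._products)

        def view(self, user, product):
            self._viewed.setdefault(user, []).append(product)

        def viewed_by(self, user):
            return list(self._viewed.get(user, []))

    if __name__ == "__main__":
        unittest.main()
    ```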

    Read the article

  • 11/28 Thought Leaders Webinar: Marketing Strategies for Great Customer Experiences

    - by Charles Knapp
    With the growing use of mobile and social, it's tempting to bolt on these new channels to existing processes. However, that piecemeal approach may not lead to satisfying customer experiences or solid returns on investments. Furthermore, the volume of information businesses have access to is growing exponentially. Is this leading to better business insight and customer experiences? Join the Internet Marketing Association, The University of California at Irvine, and Oracle as we discuss marketing strategies that will help your customers have better experiences with your brand. You'll learn effective strategies for harnessing the power of "big data" to know more and understand your customers better, empowering customers and employees to make every interaction easy and rewarding, and adapting the customer experience to connect and engage effectively with each customer. Our speakers are Melissa Boxer, Vice President of Product Strategy, Oracle Cloud and CX Applications, who is a conference keynote speaker on integrated social marketing and loyalty analytics, and Dean Abbott, CEO of Abbott Analytics, who is a thought leader in commercial predictive analytics. This learning opportunity takes place on Wednesday, November 28, 11 am to 12 pm Pacific. Register today to learn from these thought leaders.

    Read the article

  • New Versions of Whitepapers are available

    - by Anthony Shorten
    The set of whitepapers that are available is progressively being updated and republished to reflect new versions of the products as well as new advice for existing customers. A number of updated whitepapers are now available (the My Oracle Support Doc Id is indicated):
      - What’s New in Oracle Utilities Application Framework V4 (Doc Id: 1177265.1) - updated for the latest facilities in Oracle Utilities Application Framework V4.1.
      - Batch Best Practices (Doc Id: 836362.1) - updated with newer advice, including more details of how CLUSTERED mode works, how to migrate to CLUSTERED mode, and some configuration examples covering typical configuration scenarios.
      - Oracle Utilities Application Framework Architecture Guidelines (Doc Id: 807068.1) - updated to reflect additional architecture advice.
      - Performance Troubleshooting Guides (Doc Id: 560382.1) - updated for the latest facilities in Oracle Utilities Application Framework V4.1, including additional techniques that have been used by customers to track performance.
    The whitepapers apply to all Oracle Utilities Application Framework products, which at the present time include:
      - Oracle Utilities Customer Care And Billing (V2.x)
      - Oracle Enterprise Taxation Management (V2.x)
      - Oracle Utilities Business Intelligence (V2.x)
      - Oracle Utilities Meter Data Management (V2.x)
      - Oracle Utilities Mobile Workforce Management (V2.x)
      - Oracle Utilities Smart Grid Gateway (V2.x)
    Additional whitepapers and updates will be posted as they become available.

    Read the article
