Search Results

Search found 4272 results on 171 pages for 'processes'.


  • Revolutionizing Digital Commerce

    - by bwalstra
    The confluence of the Internet, the pace of change in technology, and the demands of the value-conscious consumer are accelerating the evolution of the global digital marketplace at an unprecedented rate. Success in the new digital economy has become inextricably linked with the agility to launch innovative products, services, and new business models efficiently with minimal risk. A major obstacle to agility, and by extension to success in digital commerce, is the fact that by and large information technology (IT) infrastructure is tightly coupled with particular business models. Enterprises, through a well-intentioned but misconstrued cost-saving belief, continue to customize existing infrastructure and create new silos to support new business models. In reality, this approach results in rigid, inflexible business processes and exposes the enterprise to unnecessary risks, higher opportunity costs, and lower profit margins. Oracle, a leading supplier of business solutions to the enterprise, is enabling the business strategies necessary to succeed in the digital economy by offering a modern, open, modular, and functionally comprehensive revenue management solution that decouples IT infrastructure from business models. Enterprises using the Oracle solution are able to focus on core competencies and innovate unimpeded, assuring that business and IT systems will seamlessly adapt to changing conditions of the digital economy. Revolutionizing Digital Commerce: An Oracle Revenue Management Solution

    Read the article

  • JMX Monitoring of GlassFish Servers

    - by tjquinn
    Did you ever wonder what this message in your GlassFish server.log file means?

        JMXStartupService has started JMXConnector on JMXService URL service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

    It means you can monitor any GlassFish server process, remotely or locally, using any standard Java Management Extensions (JMX) client, such as jconsole or jvisualvm. Copy the part of the log message that starts with "service:" into the Add JMX Connection dialog of jvisualvm, or into the New Connection dialog of jconsole. (The full string is truncated in the on-screen display, but if you copy it from the server.log and paste it into the form it should all be there.) The examples above are for a DAS, and your host will probably be different. The server.log files for other GlassFish servers (instances) will have similar log entries giving the JMX connection string to use for those processes; look for the host and/or port to be different. Note a few things about security:
    - Here we've assumed you are using the default admin username and password. If you are not, just enter a valid admin username and password for your installation. Once connected, you have normal access to all the JVM statistics and controls.
    - You can use JMX clients that support MBeans to view the GlassFish configuration. When you connect to the DAS, you can also change that configuration, but you can only view the configuration when you connect to an instance.
    - To use a JMX client on one system to connect to a GlassFish server running on another system, you need to enable secure admin if you have not already done so:

        asadmin change-admin-password    (respond to the prompts)
        asadmin enable-secure-admin
        asadmin restart-domain           (as prompted in the output from enable-secure-admin)
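    The same connection string also works from a JMX client you write yourself. Here is a minimal standalone sketch using the standard javax.management.remote API; the URL is the one copied from server.log, and the host and credentials shown are placeholders for your own installation:

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanServerConnection;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class GlassFishJmxPing {
            public static void main(String[] args) throws Exception {
                // Paste the "service:..." string from your own server.log here
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi");

                // Admin credentials; replace with valid ones for your domain
                Map<String, Object> env = new HashMap<>();
                env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "adminpassword"});

                try (JMXConnector conn = JMXConnectorFactory.connect(url, env)) {
                    MBeanServerConnection mbeans = conn.getMBeanServerConnection();
                    System.out.println("Connected; MBeans visible: " + mbeans.getMBeanCount());
                }
            }
        }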

    Read the article

  • Consuming Async SOA in a WebService Proxy By Anagha Desai

    - by JuergenKress
    Consider a scenario where an application is built using SOA Async processes and needs to be consumed in a WebService Proxy. In this blog, we will demonstrate how to implement this use case. To achieve this, we will follow a two-step process: create an Async SOA BPEL process, then consume it in a WebService Proxy. Pre-requisite: JDeveloper with the SOA extension installed. Steps: To begin with step 1, create a SOA Application and name it SOA_AsyncApp. This invokes the Create SOA Application wizard. In the wizard, choose a composite with a BPEL process in Step 3. Read the complete article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Anagha Desai, Async SOA, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Which language is more suitable for heavy file tasks?

    - by All
    I need to write a script (based on basic functions) to process image/audio/video files. The work is mainly filesystem tasks and conversions. The database of files is stored in MySQL. The script is simple but puts a heavy load on the system; for example, renaming/converting/copying thousands of files in a run. The script does not read the content of files into memory; it just manages the commands for sub-processes. The main weight is on the communication with the filesystem. The script will be used regularly for new files. My concern is about performance. I am thinking of either a shell script or a compiled language like C. Please advise which programming language is more suitable for this purpose and why? UPDATE: An example is to scan a folder for images, convert them with ImageMagick, move the files to a destination folder, get the file info, then update the database. As you can see, the process has no room for optimization, and most languages have similar APIs for popular programs like ImageMagick, MySQL, etc. Thus, it can be written in any language. I just wish to reduce resource usage by speeding up the long loop. NOTE: I know that questions comparing languages are not favored, but I really had a problem choosing, because the problems only appear in action.
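    For comparison, here is what that loop might look like if the "script" were written in Java; it is a hedged sketch only (the table, paths, and MySQL credentials are invented, and it assumes ImageMagick's convert on the PATH and a MySQL JDBC driver on the classpath). The point is that almost all the time is spent in the sub-process and filesystem calls, which any of the candidate languages would delegate in the same way:

        import java.nio.file.*;
        import java.sql.*;
        import java.util.stream.Stream;

        public class ConvertAndMove {
            public static void main(String[] args) throws Exception {
                Path inbox = Paths.get("/data/inbox");
                Path done  = Paths.get("/data/processed");
                try (Connection db = DriverManager.getConnection(
                         "jdbc:mysql://localhost/files", "user", "password");
                     PreparedStatement upd = db.prepareStatement(
                         "UPDATE files SET status = 'done', bytes = ? WHERE name = ?");
                     Stream<Path> images = Files.list(inbox)) {
                    for (Path src : (Iterable<Path>) images::iterator) {
                        Path dst = done.resolve(src.getFileName() + ".jpg");
                        // Delegate the conversion to ImageMagick, as a script would
                        new ProcessBuilder("convert", src.toString(), dst.toString())
                            .inheritIO().start().waitFor();
                        // Record the result in MySQL, then clean up the source file
                        upd.setLong(1, Files.size(dst));
                        upd.setString(2, src.getFileName().toString());
                        upd.executeUpdate();
                        Files.delete(src);
                    }
                }
            }
        }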

    Read the article

  • How to visualize the design of a program in order to communicate it to others

    - by Joris Meys
    I am (re-)designing some packages for R, and I am currently working out the necessary functions and objects, both internal and for the interface with the user. I have documented the individual functions and objects, so I have the description of all the little parts. Now I need to give an overview of how the parts fit together - the scheme of the motor, so to say. I've started with making some flowchart-like graphs in Visio, but that quickly became a clumsy and useless collection of boxes, arrows and what-not. So hence the question: Is there specific software you can use for visualizing the design of your program? If so, care to share some tips on how to do this most efficiently? If not, how do other designers create the scheme of their programs and communicate that to others? Edit: I am NOT asking how to explain complex processes to somebody, nor asking how to illustrate programming logic. I am asking how to communicate the design of a program/package, i.e.: the objects (with key features and representation if possible), the related functions (with arguments and function if possible), and the interrelation between the functions at the interface and the internal functions (I'm talking about an extension package for a scripting language, keep that in mind). So something like this: But better. This is (part of) the interrelations between functions in the old package that I'm now redesigning for obvious reasons :-) PS: I made that graph myself, using code extraction tools on the source and feeding the interrelation matrix to yEd Graph Editor.

    Read the article

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc... We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint. Some of us think that's overkill. If a project encompasses three Visual Studio solutions, and we branch every sprint, then that's three branches, and three CI builds to create every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. For those of us that think that branching every sprint is overkill, we don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?

    Read the article

  • Advanced Level Troubleshooting for Oracle Process Manufacturing Financials

    - by Annemarie Provisero
    ADVISOR WEBCAST: Advanced Level Troubleshooting for Oracle Process Manufacturing Financials
    PRODUCT FAMILY: Oracle Process Manufacturing
    February 16, 2011 at 8 am PT, 9 am MT, 11 am ET
    This one-hour session provides basic to advanced level troubleshooting information for Functional Users, System Administrators, DBAs and Customers. TOPICS WILL INCLUDE:
    - Find log files and error messages for important processes in OPM Financials.
    - Important SQL queries and filtering transaction-related issues.
    - Enabling Debug mode in OPM Financials and SLA.
    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology does promote human understanding and communication through the use of short, iterative software development life cycles, which force stakeholders to review the project and adjust it for any requirement changes. Due to the consistent evaluation of a project and its requirements, processes are continually being refined, upgraded, and compared against alternatives to ensure the best design is delivered to the client. Due to the short, repetitive development cycles, increased time is devoted to process management, because requirements and designs can be constantly changing. This requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance in performing processes must be updated for each iteration, because the environment and the available reusable processes could change. In addition, the original guidance and suggestions for the project also need to be updated to account for these changes. In essence, the automation of process execution is supported by the agile methodology, because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer's requirements. I do not think the agile approach diminishes modeling; in fact, I think it increases modeling, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So, in essence, the time saved by spending less on the initial design of the models is regained as the project completes each iteration.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?" Process Oracle OER Events using a simple Web Service | Bob Webster Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types." Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen. WebCenter Content (WCC) Trace Sections | ECM Architect ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections. Thought for the Day "A complex system that works is invariably found to have evolved from a simple system that worked." — John Gall Source: SoftwareQuotes.com

    Read the article

  • OUM is Flexible and Scalable

    - by user535886
    Flexible and Scalable

    Traditionally, projects have been focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus of delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business. In OUM, this principle is extended to refer to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements. Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project. OUM provides well-defined templates for many of its tasks. Use of these templates is optional as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Do you store mysql exports in your version control tool for reverting to in the event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur:
    1. Changes to the in-house server software may be required to support them - these are for significant changes in functionality, being code-driven.
    2. Changes to the MySQL database, for new part-number entries - these are for smaller changes: configurations, and changes to already-existing values and parameters. Such changes don't require code changes, so ideally we'd want our changes to be here rather than in item 1.
    Item 1 is version controlled in Subversion, so previous revisions can be referred to for rolling back in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors can pass through, e.g. a mistake in data entry, or faults in the code that uses MySQL corrupting the database, etc. We have an automated backup every 6 hours, but what if we want more manually defined checkpoints in between these intervals? We could use the same backup system, but I wondered if folks here use other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump - at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting. Thoughts?
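    To illustrate the plain-text dump idea: a hedged sketch (database name, credentials, and paths are invented) that shells out to mysqldump with flags chosen so that unchanged data produces byte-identical dumps, which keeps the diffs you review in Subversion or Beyond Compare readable:

        import java.io.File;
        import java.time.LocalDateTime;
        import java.time.format.DateTimeFormatter;

        public class DbCheckpoint {
            public static void main(String[] args) throws Exception {
                String stamp = LocalDateTime.now()
                    .format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
                File dump = new File("checkpoints/factory-" + stamp + ".sql");
                dump.getParentFile().mkdirs();

                // --skip-extended-insert writes one INSERT per row and
                // --skip-dump-date omits the footer timestamp, so dumps of
                // unchanged data are byte-identical and diff cleanly
                int rc = new ProcessBuilder("mysqldump",
                        "--skip-extended-insert", "--skip-dump-date",
                        "--user=backup", "--password=secret", "factory_db")
                    .redirectOutput(dump)
                    .start()
                    .waitFor();
                if (rc != 0) throw new IllegalStateException("mysqldump exited with " + rc);
                // The dump file can now be committed to Subversion alongside the code
            }
        }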

    Read the article

  • Three Fusion Applications Communities are Now Live

    - by cwarticki
    The Fusion Application Support Team (FAST) launched three communities on the My Oracle Support Community. These communities provide another channel for customers to get the information about Fusion Applications that they need. The three Fusion Applications communities are:
    - Technical - FA community: covers all the Fusion Applications technology stack and technical questions from users.
    - Applications and Business Processes community: covers all the functional questions and issues raised by users for all Fusion Applications except HCM.
    - Fusion Applications HCM community: covers the functional questions and issues raised by users for the Fusion HCM product family.
    Good for Our Customers: Customers participating in these communities can ask questions and get timely responses from Oracle Fusion Applications experts who monitor the communities. The customers can search the Fusion Applications Community contents for information and answers. They also can collaborate with other customers and benefit from the collective experience of the community - especially from people like you. All customers and partners are invited to join My Oracle Support Community for Fusion Applications. We believe that participating in the Fusion Applications communities can be a win-win option for everyone. We invite you to become an active part of the thriving Fusion Applications communities and experience how this interesting and insightful dialog can benefit you.
    How to Join the Community: Navigate to http://communities.oracle.com. Click the Profile Tab to register yourself and edit your profile.
    - You can subscribe to the Fusion Applications communities by editing your Community Subscriptions.
    - You can get RSS feeds for each of your subscribed communities from the same section.

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I've gathered so far:
    - One can already do syntax highlighting with only the list of tokens. Numbers and operators get coloured accordingly, and keywords too.
    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many whitespace or newline characters should follow it, and maintain an alignment variable while printing: when the code printer reads "{", increment the alignment variable by 1, and decrement it by 1 for "}". Whenever it starts printing on a new line, the printer indents according to this alignment variable (see the sketch below).
    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls whom. If one can identify the body of a function, then one can also search it for mentions of other functions' names.
    - Gathering statistics about the code, like the number of lines, instructions, or subroutines.
    EDIT: I clarified why I think some processes are possible. As I read comments and responses I realise that the answer depends very much on the language that I'm parsing.
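    Here is the indenting idea from the second bullet as a small, hedged sketch; the token values and the four-space indent are arbitrary choices, and the spacing around operators is deliberately crude:

        import java.util.List;

        public class TokenIndenter {
            // Re-indents a token stream using only lexical information:
            // "{" increases the indent level, "}" decreases it, ";" ends a line.
            public static String format(List<String> tokens) {
                StringBuilder out = new StringBuilder();
                int level = 0;
                boolean atLineStart = true;
                for (String t : tokens) {
                    if (t.equals("}")) {
                        level = Math.max(0, level - 1);
                        if (!atLineStart) out.append('\n');
                        atLineStart = true;
                    }
                    if (atLineStart) {
                        out.append("    ".repeat(level));
                        atLineStart = false;
                    } else {
                        out.append(' ');
                    }
                    out.append(t);
                    if (t.equals("{") || t.equals("}") || t.equals(";")) {
                        out.append('\n');
                        atLineStart = true;
                        if (t.equals("{")) level++;
                    }
                }
                return out.toString();
            }

            public static void main(String[] args) {
                // Prints:  if ( x > 0 ) {  /  y = 1 ;  (indented)  /  }
                System.out.print(format(List.of(
                    "if", "(", "x", ">", "0", ")", "{", "y", "=", "1", ";", "}")));
            }
        }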

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as a standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, which get passed through larger classes that only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: It's easy to start thinking of everything in terms of the database. Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented operation when you think about request handlers, threads, processes, CPU cores and CPU operations... I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?

    Read the article

  • Profit : August, 2012

    - by user462779
    August 2012 issue of Profit is now available online. Way back in 2003, I wrote my first feature for Profit. It was titled "Everything You Always Wanted to Know About Application Servers (But Were Afraid To Ask)," and it discussed "cutting-edge" technologies like portals and XML and the brand-new Java Platform, Enterprise Edition (Java EE; we're now on Java EE 7). But despite the dated terms I used in my Profit debut, I noticed something in rereading that old story that has stayed constant: mid-tier technology is where innovative enterprise IT projects happen. It may have been XML in 2003, but it's SOA in 2012. While preparing the August issue of Profit was more than just a stroll down memory lane for me, it has provided a nice bit of perspective about what changes and what doesn't in this dynamic IT industry. Technologies continuously evolve - some become standard practice, some are revived or reinvented, and some are left by the wayside. But the drive to innovate and the desire to succeed are business principles that never go out of fashion. Also, be sure to check out the Profit JD Edwards Special Issue 2012 (PDF), featuring partner profiles, customer successes, and Oracle executive interviews. In this issue:
    - The Middleware Advantage: Three ways a flexible, integrated software layer can deliver a competitive edge.
    - Playing to Win: Electronic Arts' superefficient hub processes millions of online gaming transactions every day.
    - Adjustable Loans: With Oracle Exadata, Reliance Commercial Finance keeps pace with India's commercial loan market.
    - Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate.
    - Spring Training: Knowledge and communication help Jackson Hewitt's Tim Bechtold get seasonal workers in top shape.
    - Keeping Online Customers Happy: Customers worldwide are comfortable with online service - but are companies meeting customers' needs?

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh. The existing system:
    - A Microsoft Access project performs dashboard duties.
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project.
    - It uses 3rd-party libraries referenced by Access to directly interface with the control system (read: motors, conveyors, and counters).
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) run at each facility.
    The issue: This system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction, and who is prescribing the desired deliverables being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestions that Access is ill-suited for a project of this scope are shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; to be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services – even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidating databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do? We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – "managing many as one" – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones.
    - Development / Testing: where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation: of disparate applications using fewer, more powerful servers
    - Software as a Service: deploying separate copies of identical applications to individual tenants
    - Database as a Service: typically self-service provisioning of databases on the private cloud
    - Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) – just plug in a PDB!
    - High-volume data distribution: literally via disk drives in envelopes distributed by truck! – distribution of things like GIS or MDM master databases
    - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
    - Minimize CapEx: more applications per server
    - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    - Maximize Agility: cloning for development and testing; portability through pluggability; scalability with RAC
    - Ease of Adoption: applications run unchanged; it's a pure deployment choice. Neither the database backend nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant
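    To make the self-service provisioning use cases above concrete, here is a hedged JDBC sketch of creating and opening a new PDB from the seed. This is an illustration, not Oracle's official sample: the service name, credentials, and file paths are invented, and the connecting user is assumed to hold the CREATE PLUGGABLE DATABASE privilege.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class ProvisionPdb {
            public static void main(String[] args) throws Exception {
                // Connect to the root of the container database (CDB), not to a PDB
                try (Connection root = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/CDB1", "c##dba", "password");
                     Statement ddl = root.createStatement()) {
                    // Clone the seed PDB into a new pluggable database
                    ddl.execute("CREATE PLUGGABLE DATABASE devtest"
                        + " ADMIN USER pdbadmin IDENTIFIED BY pdbadmin"
                        + " FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',"
                        + " '/u01/oradata/CDB1/devtest/')");
                    // A newly created PDB starts mounted; open it for service
                    ddl.execute("ALTER PLUGGABLE DATABASE devtest OPEN");
                }
            }
        }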

    Read the article

  • Apache still running after uninstalling

    - by Ruslan Osipov
    I am trying to uninstall Apache to install nginx, but it doesn't seem to work.

        $ ps aux | grep httpd
        root     22348  0.0  0.2 167252 8864 ?     Ss  14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22353  0.0  0.1 167624 6088 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22354  0.0  0.1 167252 5292 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22355  0.0  0.1 167252 5052 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22356  0.0  0.1 167252 5052 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22357  0.0  0.1 167252 5052 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22797  0.0  0.1 167252 5052 ?     S   14:38  0:00 /usr/sbin/httpd -k start -DSSL
        1003     22883  0.0  0.0   9388  884 pts/1 S+  14:46  0:00 grep httpd

        $ which apache2
        $ dpkg -S apache
        bash-completion: /etc/bash_completion.d/apache2ctl
        apparmor: /etc/apparmor.d/abstractions/apache2-common
        $ dpkg -S `which httpd`
        dpkg-query: no path found matching pattern /usr/sbin/httpd.

    The package seems to be uninstalled, but the processes are still running. And /usr/sbin/httpd is still there. Any hints?

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? (A sketch of one such boundary case follows below.) I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are, exactly. Some examples of possible boundaries:
    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)
    Special thanks to @JimmaHoffa for his comment, which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
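    As one concrete illustration of the boundary: even with fully immutable values, aggregating results from several workers needs an atomic publication step. A hedged Java sketch (the workload is invented) that uses a compare-and-set retry loop on an immutable snapshot instead of a lock:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.atomic.AtomicReference;

        public class LockFreeAggregate {
            private static final AtomicReference<List<Integer>> results =
                new AtomicReference<>(List.of());

            // Publish a worker's result by swapping in a new immutable snapshot;
            // the CAS retry loop is the coordination that immutability alone
            // does not provide
            static void publish(int value) {
                while (true) {
                    List<Integer> prev = results.get();
                    List<Integer> next = new ArrayList<>(prev);
                    next.add(value);
                    if (results.compareAndSet(prev, List.copyOf(next))) return;
                }
            }

            public static void main(String[] args) throws InterruptedException {
                List<Thread> workers = new ArrayList<>();
                for (int i = 0; i < 8; i++) {
                    final int v = i;
                    workers.add(new Thread(() -> publish(v * v)));
                }
                workers.forEach(Thread::start);
                for (Thread t : workers) t.join();
                System.out.println(results.get()); // all 8 results, no locks taken
            }
        }

    The CAS loop is lock-free, but it is still a synchronization primitive: immutability moves the contention onto a single reference rather than eliminating it.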

    Read the article

  • Functional testing in the verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. For me that is more high-level, like UAT test cases written by business users. The ISO mentioned above does not mention any specific verification (7.2.4.3.2) except for requirements verification, design verification, and document and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on the V-model graph it describes system testing against a high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Today's Links (6/23/2011)

    - by Bob Rhubart
    Lydia Smyers interviews Justin "Mr. OTN" Kestelyn on the Oracle ACE Program Justin Kestelyn describes the Oracle ACE program, what it means to the developer community, and how to get involved. Incremental Essbase Metadata Imports Now Possible with OBIEE 11g | Mark Rittman "So, how does this work, and how easy is it to implement?" asks Oracle ACE Director Mark Rittman, and then he dives in to find out. ORACLENERD: The Podcast Oracle ACE Chet "ORACLENERD" Justice recounts his brush with stardom on Christian Screen's The Art of Business Intelligence podcast. Bay Area Coherence Special Interest Group Next Meeting July 21, 2011 | Cristóbal Soto Soto shares information on next month's Bay Area Coherence SIG shindig. New Cloud Security Book: Securing the Cloud by Vic Winkler | Dr Cloud's Flying Software Circus "Securing the Cloud is the most useful and informative about all aspects of cloud security," says Harry "Dr. Cloud" Foxwell. Oracle MDM Maturity Model | David Butler "The model covers maturity levels around five key areas: Profiling data sources; Defining a data strategy; Defining a data consolidation plan; Data maintenance; and Data utilization," says Butler. Integrating Strategic Planning for Cloud and SOA | David Sprott "Full blown Cloud adoption implies mature and sophisticated SOA implementation and impacts many business processes," says Sprott.

    Read the article

  • Software development process for a part-time University project for 1 developer?

    - by Pricey
    I will be doing a part-time University project soon. The time frame for it is around 8 months, with approximately 10-15 hours a week spent working on it and a review by a tutor each quarter. My question is: what software development process would you recommend when the course requires you to work on your own, managing yourself as well as the project? I wanted to use a weekly or bi-weekly iterative approach to my work, but a lot of the processes seem tailored to teams of people. I am looking at XP (Extreme Programming) or Scrum as something that is less than the norm for University work, but again, Scrum I don't know a lot about yet, and a question I have is: can you say you are doing XP without pair programming? My tutor seems to think that I have to stick to all the practices, otherwise I can't do it (never mind that I am working alone). We can have external user input as well, but due to the small timescales of part-time work it may be more beneficial for me to be the user as well, which is not what I prefer, considering how I can get lost in the design.

    Read the article

  • MapReduce

    - by kaleidoscope
    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

    Example: a process to count the appearances of each different word in a set of documents

        void map(String name, String document):
          // name: document name
          // document: document contents
          for each word w in document:
            EmitIntermediate(w, 1);

        void reduce(String word, Iterator partialCounts):
          // word: a word
          // partialCounts: a list of aggregated partial counts
          int result = 0;
          for each pc in partialCounts:
            result += ParseInt(pc);
          Emit(result);

    Here, each document is split into words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, so this function just needs to sum all of its input values to find the total appearances of that word.

    Sarang, K
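    For comparison, the same word count can be run single-process in plain Java; a hedged sketch (swapping stream() for parallelStream() fans the map phase across cores, which is the in-memory analogue of what the MapReduce framework does across machines):

        import java.util.Arrays;
        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        public class WordCount {
            public static void main(String[] args) {
                List<String> documents = List.of(
                    "the quick brown fox", "the lazy dog", "the fox");

                // "map" phase: split every document into word emissions;
                // "shuffle"/"reduce" phase: group equal words and sum their counts
                Map<String, Long> counts = documents.stream()
                    .flatMap(doc -> Arrays.stream(doc.split("\\s+")))
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

                counts.forEach((word, n) -> System.out.println(word + " -> " + n));
            }
        }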

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution.

    I started with a List control which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements needed to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset.

    The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID.

    Create a second report which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document by parameterizing the dataset.

    Add the subreport to the main report inside the row of the List control. This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property.

    The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. While some of the documentation states that the dialog will automatically detect the child parameters, this has not been my experience. You must make sure that the names match exactly. Tie the name of the parameter to either a field in the dataset or a parameter of the parent report.

    del.icio.us Tags: SQL Server Reporting Services,SSRS,SQL Server,Subreports

    Read the article

  • USB blocks suspend on a Gigabyte GA-890GPA-UD3H with ATI SB700/SB800

    - by poolie
    Following on from question 12397, I'd still like to get suspend working on my Phenom II X6 / GA-890GPA desktop machine running current Maverick. When I run pmi action suspend the machine doesn't crash, but it also doesn't suspend. The kernel logs show:

        PM: Syncing filesystems ... done.
        PM: Preparing system for mem sleep
        Freezing user space processes ... (elapsed 0.02 seconds) done.
        Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done.
        PM: Entering mem sleep
        Suspending console(s) (use no_console_suspend to debug)
        pm_op(): usb_dev_suspend+0x0/0x20 returns -2
        PM: Device usb8 failed to suspend async: error -2
        PM: Some devices failed to suspend
        PM: resume of devices complete after 0.430 msecs
        PM: resume devices took 0.000 seconds
        PM: Finishing wakeup.
        Restarting tasks ... done.
        PM: Syncing filesystems ...

    I've tried disconnecting all the USB devices, and then connecting in to run pmi over ssh, and I get the same failure. With everything unplugged, I see the following USB devices:

        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    and lspci shows the physical devices are:

        00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
        00:16.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:16.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)

    Booting with no_console_suspend makes no difference.

    Read the article
