Search Results

Search found 28550 results on 1142 pages for 'business objects sdk'.


  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging, so bring your toughest Dimensional Modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • Oracle Business Intelligence integration with Oracle Open Office

    - by Harald Behnke
    A highlight of the latest Oracle Office product launches is the first set of Oracle application connectors introduced with Oracle Open Office 3.3. The Oracle Open Office Connector for Oracle Business Intelligence perfectly demonstrates the advantages of enterprise and office productivity software engineered to work together. The connector enables you to access and run Oracle Business Intelligence Enterprise Edition requests directly within Oracle Open Office. The refreshable requests leverage not only native Open Office functionality but also the scalability and performance of the Oracle Business Intelligence server (R10.x). The requests reference a single source of information as defined in the Oracle Business Intelligence server data, thus ensuring consistent information across the enterprise. See how it works in the demo video: Beyond the dramatic license cost savings for Oracle Business Intelligence customers using Oracle Open Office, the joint engineering efforts result in usability and efficiency benefits not available with Microsoft Office: import styles and conditional formats defined in Business Intelligence Answers; apply customized styles, direct or conditional formats to Oracle Business Intelligence data - all changes are preserved during refresh; change chart properties for Oracle Open Office charts - all changes are preserved during refresh. Read more about the Oracle Open Office enterprise features.

    Read the article

  • iPhone SDK 3/4 App will not run on an iPhone 2.x device even with deployment target set to 2.0!

    - by MarqueIV
    Ok... I know about the difference between the base/active SDKs and the deployment target. I have my base SDK set at 4.0 and the deployment target set at 2.0. I am not using any APIs post 2.x, conditional or otherwise. Since I can't debug on a 2.x device, after building it, I use the iPhone Configuration Utility to install the app on the device, which it does just fine. Problem is, it doesn't run! I just get a blank screen. The main window never comes up! Now before you ask... I had this same problem with the iPhone SDK 3.x. I upgraded to 4.x hoping it would be solved. It wasn't. Yes, the provisioning profile is installed. (Couldn't install the app if it wasn't.) This same compiled app works fine on 3.x devices. Same with 4.x devices. Just not 2.x devices. Again, no, I am not using any post-2.x SDKs. To prove this I created a brand-new, window-based app from the 'New Project' dialog, and the only changes I made were the background color of the window (to prove the XIB loaded) and setting the deployment target to 2.0. (It's still compiled against the 4.x SDK though.) Again, it runs fine on 3.x or 4.x devices, but shows just a black, blank screen on 2.x devices. I've tried this on three separate 2.x devices, including one freshly restored. I've used three separate dev machines (a MacBook Pro with the 3.x SDK, a MacBook Pro with the 4.x SDK and a Mac Pro with the 3.x SDK.) Same result every time. I am stumped. The fact that even an unmodified project doesn't run really has me confused. Could it be the XIB file? Did they change the format from 2.x to something newer in the 3.x SDK? If so, how do I set it back to 2.x? (Again, this is just a complete guess.) But I'm really stumped! Mark

    Read the article

  • Project structure: where to put business logic

    - by Mister Smith
    First of all, I'm not asking where business logic belongs. This has been asked before and most answers I've read agree that it belongs in the model: Where to put business logic in MVC design? How much business logic should be allowed to exist in the controller layer? How accurate is "Business logic should be in a service, not in a model"? Why put the business logic in the model? What happens when I have multiple types of storage? However, people disagree on the way this logic should be distributed across classes. There seem to be three major currents of thought: Fat model with business logic inside entity classes. Anemic model and business logic in "Service" classes. It depends. I find all of them problematic. The first option is what most Fowlerites stick to. The problem with a fat model is that sometimes a business logic function is not only related to one class, and instead uses a bunch of other classes. If, for example, we are developing a web store, there should be a function that calculates an order's total. We could think of putting this function inside the Order class, but what actually happens is that the logic needs to use different classes: not only data contained in the Order class, but also in the User class, the Session class, and maybe the Tax class, Country class, or Giftcard, Payment, etc. Some of these classes could be composed inside the Order class, but some others not. Sorry if the example is not very good, but I hope you understand what I mean. Putting such a function inside the Order class would break the single responsibility principle, adding unnecessary dependencies. The business logic would be scattered across entity classes, making it hard to find. The second option is the one I usually follow, but after many projects I'm still in doubt about how to name the class or classes holding the business logic. In my company we usually develop apps with offline capabilities. The user is able to perform entire transactions offline, so all validation and business rules should be implemented in the client, and then there's usually a background thread that syncs with the server. So we usually have the following classes/packages in every project: Data model (DTOs), Data Access Layer (Persistence), Web Services layer (usually one class per WS, and one method per WS method). Now for the business logic, what is the standard approach? A single class holding all the logic? Multiple classes? (If so, what criteria are used to distribute the logic across them?) And how should we name them? FooManager? FooService? (I know the last one is common, but in our case it is bad naming because the WS layer usually has classes named FooWebService.) The third option is probably the right one, but it is also devoid of any useful info. To sum up: I don't like the first approach, but I accept that I might have been unable to fully understand the Zen of it. So if you advocate for fat models as the only and universal solution you are welcome to post links explaining how to do it the right way. I'd like to know what the standard design and naming conventions are for the second approach in OO languages. Class names and package structure, in particular. It would also be helpful if you could include links to open source projects showing how it is done. Thanks in advance.
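
    For readers weighing the second option, here is a minimal Java sketch of the service-class approach. All type names here (LineItem, TaxPolicy, DiscountPolicy, OrderPricingService) are made up for illustration; they are not taken from the question or any particular framework.

        import java.math.BigDecimal;
        import java.util.List;

        // Plain entities ("anemic" model): they hold data, not the pricing rules.
        class LineItem {
            final BigDecimal unitPrice;
            final int quantity;
            LineItem(BigDecimal unitPrice, int quantity) { this.unitPrice = unitPrice; this.quantity = quantity; }
        }

        class Order {
            final List<LineItem> items;
            Order(List<LineItem> items) { this.items = items; }
        }

        // Collaborators the calculation needs but the Order should not own.
        interface TaxPolicy { BigDecimal taxFor(BigDecimal netAmount); }
        interface DiscountPolicy { BigDecimal discountFor(BigDecimal netAmount); }

        // The business logic lives in a dedicated service, so Order stays free of
        // tax, discount, country or gift-card concerns.
        class OrderPricingService {
            private final TaxPolicy taxPolicy;
            private final DiscountPolicy discountPolicy;

            OrderPricingService(TaxPolicy taxPolicy, DiscountPolicy discountPolicy) {
                this.taxPolicy = taxPolicy;
                this.discountPolicy = discountPolicy;
            }

            // Sum the line items, apply the discount, then add tax.
            BigDecimal total(Order order) {
                BigDecimal net = BigDecimal.ZERO;
                for (LineItem item : order.items) {
                    net = net.add(item.unitPrice.multiply(BigDecimal.valueOf(item.quantity)));
                }
                BigDecimal discounted = net.subtract(discountPolicy.discountFor(net));
                return discounted.add(taxPolicy.taxFor(discounted));
            }
        }

    The entities stay small, the cross-cutting calculation has one home, and the naming question raised in the post (FooManager vs. FooService) applies to OrderPricingService.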

    Read the article

  • Use of Business Parameters in BPM12c

    - by Abhishek Mittal-Oracle
    With the release of BPM12c, a new feature for Business Parameters is introduced, through which we can define a business parameter that behaves as a global variable and can be used within a BPM project. The Business Administrator can be the one responsible for modifying business parameter values dynamically at run-time, which may change the BPM process flow where they are used. This feature was part of the BPM10g product and was extensively used; in BPM11g, this feature is not currently present.

    Business Parameters can be defined in 2 ways:
    1. Using JDev to define business parameters, and
    2. Using BPM workspace to define business parameters.

    It is important to note that business parameters need to be mapped to a valid organization unit defined in a BPM project. If this is not handled, exceptions like 'BPM-70702' will be thrown by the BPM Engine. This is because business parameters work along with the organization defined in a BPM project. At the same time, we can use the same business parameter across different organization units with different values. Business Parameters in BPM12c have the capability to handle multiple values with different organization units defined in a single BPM project. This enables the business to re-use the same business parameters defined in a BPM project across different organizations.

    Business parameters can be defined using the below data types:
    1. int
    2. string
    3. boolean
    4. double

    While defining a business parameter, it is mandatory to provide a default value.

    Below are the steps to define a business parameter in JDev:
    Step 1: Open 'Organization' and click on the 'Business Parameters' tab.
    Step 2: Click on the '+' button.
    Step 3: Add the business parameter name and type, and provide a default value (mandatory).
    Step 4: Click on the 'OK' button.
    Step 5: The business parameter is defined.

    Below are the steps to define a business parameter in BPM workspace:
    Step 1: Log in to BPM workspace using the admin username and password.
    Step 2: Click on 'Administration' on the top right side of the workspace.
    Step 3: Click on 'Business Parameters' in the left navigation panel under 'Organization'.
    Step 4: Click on the '+' button.
    Step 5: Add the business parameter name and type, and provide a default value (mandatory).
    Step 6: Click on the 'OK' button.
    Step 7: The business parameter is defined.

    Note: As mentioned earlier in the blog, it is necessary to define and map a valid organization ID to the predefined variable 'organizationalUnit' under data associations in a BPM process before the business parameter is used. I have created one sample PoC demonstrating the use of Business Parameters in BPM12c and it can be found here.

    Read the article

  • Development: SDK for Social Net

    - by loldop
    I have a task: developing an SDK for social networking services like Facebook, Twitter, etc. Right now I'm developing a facebook-extension-sdk, which is based on facebook-ios-sdk 3.0. But not all social networking services have good SDKs, and I keep reworking my facebook-extension-sdk whenever I see ugly code :( Please advise me on good techniques for developing these SDKs (like design patterns, your own experience, or good books/sites). Thanks!
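
    One commonly suggested direction (not specific to facebook-ios-sdk, and sketched here in Java even though the question concerns iOS) is to define a small interface your application depends on and wrap each vendor SDK in an adapter. The FacebookApi type below is a hypothetical stand-in, not a real SDK class:

        // A small facade the rest of the app talks to; each social network gets an adapter.
        interface SocialClient {
            void post(String message, Callback callback);
        }

        interface Callback {
            void onSuccess();
            void onError(Exception error);
        }

        // Adapter wrapping a vendor SDK behind the shared interface.
        class FacebookClient implements SocialClient {
            private final FacebookApi api;
            FacebookClient(FacebookApi api) { this.api = api; }

            @Override
            public void post(String message, Callback callback) {
                try {
                    api.publish(message);   // delegate to the vendor SDK
                    callback.onSuccess();
                } catch (Exception e) {
                    callback.onError(e);
                }
            }
        }

        // Hypothetical vendor SDK surface, shown only so the example is self-contained.
        class FacebookApi {
            void publish(String message) { /* vendor call would go here */ }
        }

    Callers depend only on SocialClient, so a TwitterClient adapter can be added later without touching application code; the ugly vendor-specific details stay inside each adapter.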

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, and limited in function at first, growing in capability and complexity over time. There were few if any "standards", let alone "open standards" and the concepts of "open systems", and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles, struggling through costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast paced, highly charged, and competitive global economy that we have today, where the most successful companies are companies that are good at implementing, leveraging and exploiting change. Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform supporting business through evolutionary change, by employing "open": open standards; open systems; open architectures; and open solutions. Employing "open", enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business, allowing companies to focus on providing "unique business value" rather than "unique IT". The big question! Does moving from "older" technology that no longer meets the needs of today's business, to new "open" technology require yet another "revolutionary change"? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that may work well today, but in 5 years time will become yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "1 size fits all" IT solution for business. 
If all companies implemented business solutions based upon technology that required, or forced the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future? Intelligent & Informed decisions - "buying right" In a previous blog entry, we discussed the value of linking tactical to strategic The key is driving the focus to what is best for your business, handling today's tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that vendors supplying those options, focus on what is best for your business, supplying sufficient information, providing adequate answers to questions, addressing challenges, issues, concerns and objections honestly and openly, and focus on supplying solutions that are tailored for, and deliver the most business value possible for your business. Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions: Solve the problem(s) you are trying to address? Provide a solid foundation upon which to grow/enhance your business? Provide tactical gains that align with and enable your strategic business goals/objectives? Provide an infrastructure that can be leveraged with subsequent projects? Solve problems for the business overall, the lines of business, or just IT? Simplify your current environment Provide the basis for business: Efficiency Agility Clarity governance, risk, compliance real time business visibility and trend analysis Does your IT staff have the knowledge/experience to successfully manage the proposed systems once they are deployed in production? Done well, you will be presented with options tailored to your business, that enable you to drive the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage, delivering what your customers need, when they need it, so you can attract new customers, new business, and grow top line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost providing significantly improved profit margin and shareholder value.

    Read the article

  • Social Technology and the Potential for Organic Business Networks

    - by Michael Snow
    Guest Blog Post by:  Michael Fauscette, IDCThere has been a lot of discussion around the topic of social business, or social enterprise, over the last few years. The concept of applying emerging technologies from the social Web, combined with changes in processes and culture, has the potential to provide benefits across the enterprise over a wide range of operations impacting employees, customers, partners and suppliers. Companies are using social tools to build out enterprise social networks that provide, among other things, a people-centric collaborative and knowledge sharing work environment which over time can breakdown organizational silos. On the outside of the business, social technology is adding new ways to support customers, market to prospects and customers, and even support the sales process. We’re also seeing new ways of connecting partners to the business that increases collaboration and innovation. All of the new "connectivity" is, I think, leading businesses to a business model built around the concept of the network or ecosystem instead of the old "stand-by-yourself" approach. So, if you think about businesses as networks in the context of all of the other technical and cultural change factors that we're seeing in the new information economy, you can start to see that there’s a lot of potential for co-innovation and collaboration that was very difficult to arrange before. This networked business model, or what I've started to call “organic business networks,” is the business model of the information economy.The word “organic” could be confusing, but when I use it in this context, I’m thinking it has similar traits to organic computing. Organic computing is a computing system that is self-optimizing, self-healing, self-configuring, and self-protecting. More broadly, organic models are generally patterns and methods found in living systems used as a metaphor for non-living systems.Applying an organic model, organic business networks are networks that represent the interconnectedness of the emerging information business environment. Organic business networks connect people, data/information, content, and IT systems in a flexible, self-optimizing, self-healing, self-configuring, and self-protecting system. People are the primary nodes of the network, but the other nodes — data, content, and applications/systems — are no less important.A business built around the organic business network business model would incorporate the characteristics of a social business, but go beyond the basics—i.e., use social business as the operational paradigm, but also use organic business networks as the mode of operating the business. The two concepts complement each other: social business is the “what,” and the organic business network is the “how.”An organic business network lets the business work go outside of traditional organizational boundaries and become the continuously adapting implementation of an optimized business strategy. Value creation can move to the optimal point in the network, depending on strategic influencers such as the economy, market dynamics, customer behavior, prospect behavior, partner behavior and needs, supply-chain dynamics, predictive business outcomes, etc.An organic business network driven company is the antithesis of a hierarchical, rigid, reactive, process-constrained, and siloed organization. 
    Instead, the business can adapt to changing conditions, leverage assets effectively, and thrive in a hyper-connected, globally competitive, information-driven environment. To hear more on this topic – I'll be presenting in the next webcast of the Oracle Social Business Thought Leader Webcast Series - "Organic Business Networks: Doing Business in a Hyper-Connected World" this coming Thursday, June 21, 2012, 10:00 AM PDT – Register here

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incremantally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that lead us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publically available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:
        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;
    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypothesis were proposed, and shot down: Could we have disabled dmakes parallel feature? No, a quick check showed things being build in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big, a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Should I have separate business and personal websites?

    - by Thomas Clowes
    I have my business website - I am a web designer and developer, and also buy/sell websites/domain names. As such, my website links to 'Our sites' - the websites which we design and run - as well as a variety of tools such as a domain whois tool. These are obviously relevant to the business. As an individual, I like to travel and do white water kayaking as a hobby. I also have a degree in economics. I have thus created a blog on my business website where I write about domain names, web design, kayaking, travelling and economics. I've just begun researching SEO and am looking into optimizing my business website. I don't actually directly offer any services to clients at the moment; my main aim is to have a business website which supports my websites. If, for example, a potential advertiser on one of my sites checks out the business website, I want them to think professional, down to earth, quirky. Given this, is having my business/personal interests intertwined a problem? For SEO, on my homepage for example, when I'm writing a headline and a paragraph about what we do, what do I put? And how do I optimize for SEO with keywords and the like? Further to the above, my company sponsors me and a group of acquaintances as a kayaking team, so my personal interests do sort of overlap (just to add a complexity :))

    Read the article

  • SQL SERVER – What the Business Says Is Not What the Business Wants

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Steve Jones. Steve raised a very interesting question; every DBA and Database Developer has already faced this situation. When I read the topic, I felt that I could write several different examples here. Today, I will cover this scenario, which seems quite amusing. Shrinking Database Earlier this year, I was working on a SQL Server Performance Tuning consultancy when I faced a very interesting situation. No matter how much I attempted to reduce the fragmentation, I always ended up with heavy fragmentation on the server. After careful research, I figured out that one of the jobs was continuously Shrinking the Database – which is a very bad practice. I have blogged about my experience over here: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server. I removed the incorrect shrinking process right away; once it was removed, everything continued working as it should. After a couple of days, I learned that one of their DBAs had put back the same DBCC process. I requested the Senior DBA to find out what was going on, and he came up with the following reason: "Business Requirement." I could not believe this! Now, it was time for me to go deep into the subject. Moreover, it had become necessary to understand the need. After talking to the concerned people, I understood what they needed. Please read the exact business need in their own language. The Shrinking "Business Need" "We shrink the database because if we take a backup after shrinking the database, the size of the backup is smaller. Once we take the backup, we have to send it to our remote location site. Our business requirement is that we need to always make sure that the file is smallest when we transfer it to the remote server." The backup is not affected in any way whether you shrink the database or not. The size of the backup will be the same. After a couple of tests, they agreed with me. Shrinking will create performance issues, as it will introduce heavy fragmentation in the database. The Real Solution The real business need was that they needed the smallest possible backup file. We finally implemented a quick solution which they are still using to date. The solution was compressed backup. I have written about this subject in detail a few years ago: SQL SERVER – 2008 – Introduction to New Feature of Backup Compression. Compressed backup not only creates a smaller file but also increases the speed of the backup. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Discoverer 11g (11.1.1.2) Certified with E-Business Suite

    - by Steven Chan
    Discoverer is an ad-hoc query, reporting, analysis, and Web-publishing tool that allows end-users to work directly with Oracle E-Business Suite OLTP data. Discoverer 11g (11.1.1.2) is now certified with Oracle E-Business Suite Release 11i and 12. The latest release of Oracle Business Intelligence Discoverer 11g offers new functionality, including integration with Oracle Business Intelligence Enterprise Edition (OBIEE), published Discoverer Webservice APIs, integration with Oracle WebCenter, integration with Oracle WebLogic Server, integration with Enterprise Manager (Fusion Middleware Control), and improved performance and scalability.

    Read the article

  • Discoverer 11.1.1.4 Certified with E-Business Suite

    - by Steven Chan
    Oracle Business Intelligence Discoverer is an ad-hoc query, reporting, analysis, and Web-publishing tool that allows end-users to work directly with Oracle E-Business Suite OLTP data. Discoverer 11g (11.1.1.4) is now certified with Oracle E-Business Suite Release 11i and 12. Discoverer 11.1.1.4 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.4.0, also known as FMW 11g Patchset 3. Certified E-Business Suite releases are:
    EBS Release 11i 11.5.10.2 + ATG RUP 7 and higher
    EBS Release 12.0.6 and higher
    EBS Release 12.1.1 and higher

    Read the article

  • E-Business Suite at OpenWorld

    - by [email protected]
    Did you know... Oracle E-Business Suite Release 12.1 offers nine new solutions and more than 400 enhancements across human resources, supply chain management, procurement, projects, master data management, customer relationship management, and financials? With over 150 sessions dedicated to E-Business Suite at OpenWorld, you can learn all about Release 12.1. Follow this link to the OpenWorld content catalog to get a list of sessions for E-Business Suite. Or this one to get more information on Oracle E-Business Suite Release 12.1.

    Read the article

  • Problem with pre-beta SDK - even after reinstalling SDK 3.2

    - by Anders Brenna
    I'm trying to upload the binary for a new app, but always get this error message: "The binary you uploaded was invalid. A pre-release beta version of the SDK was used to build the application." I know several people have asked a similar question, but I've tried all the suggestions from the answers there without success. I used Xcode 4.0 beta 3 during development, and I've tried using it to compile for earlier releases (3.0, 3.1.3, 3.2, etc.). I've also tried downgrading to SDK 3.2, as well as removing 4.0 beta 3 and then installing SDK 3.2 as a fresh install. It seems to me that there might be some parameter in the "Edit Project Settings" that is sticking from the use of 4.0 beta 3, but I've tried to identify them without success. My last option seems to be a complete reinstall of both the OS and the SDK. Is there something else I might try first?

    Read the article

  • Creating Multiple Queries for Running Objects

    - by edurdias
    Running Objects combines the power of LINQ with metadata definitions to let you leverage multiple perspectives on your queries of objects. By default, RO brings all the objects in their natural order of insertion, including all the visible properties of your class. In this post, we will understand how the QueryAttribute class is structured and how to make use of it. The QueryAttribute class: this class is responsible for specifying all the possible perspectives of a list of objects. In other words, is...(read more)

    Read the article

  • Implementation of a Rules Engine in Your Business Applications

    - by enonu
    I'm looking for an experience-driven answer from a few software engineers who have implemented a rules engine in their internal business applications. How has it affected your business in the following ways: the ability to launch and iterate over business-driven logic; the ability to have "business users" perform the actual modification of those rules rather than developers; the ability to comprehend the business rules in general; the quality of the software releases (more or fewer bugs from the end-user's POV?); and the speed of the applications. If you had to do it all over again, what would you do differently? Lastly, I'm looking for a qualification of your answer with respect to the architecture. Would you do the same thing if you were deploying to a 1-machine setup vs. your architecture vs. a multi-tier cloud-based distributed architecture using 1000s of machines? How would it be different? Thanks!
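
    As a point of reference for answers, here is a minimal Java sketch of what "rules as data" can look like; all names are hypothetical and no specific rules-engine product is assumed:

        import java.math.BigDecimal;
        import java.util.List;
        import java.util.function.Predicate;

        // A rule pairs a condition with an outcome. Because the rule list is plain data,
        // it can be loaded from a table that business users maintain, instead of being recompiled.
        class DiscountRule {
            final String name;
            final Predicate<BigDecimal> applies;   // condition on the order amount
            final BigDecimal discount;             // outcome when the condition matches

            DiscountRule(String name, Predicate<BigDecimal> applies, BigDecimal discount) {
                this.name = name;
                this.applies = applies;
                this.discount = discount;
            }
        }

        class DiscountRuleEngine {
            private final List<DiscountRule> rules;
            DiscountRuleEngine(List<DiscountRule> rules) { this.rules = rules; }

            // Evaluate the rules in order and return the first matching discount.
            BigDecimal evaluate(BigDecimal orderAmount) {
                for (DiscountRule rule : rules) {
                    if (rule.applies.test(orderAmount)) {
                        return rule.discount;
                    }
                }
                return BigDecimal.ZERO;
            }
        }

    The trade-offs the question asks about (iteration speed, who edits the rules, comprehension, release quality) mostly come down to where that rule list lives and who is allowed to change it.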

    Read the article

  • design practice for business layer when supporting API versioning

    - by user1186065
    Is there any design pattern or practice recommended for the business layer when dealing with multiple API versions? For example, I have something like this: http://site.com/blogs/v1/?count=10 which calls the business object method GetAllBlogs(int count) to get information, and http://site.com/blogs/v2/?blog_count=20 which calls the business object method GetAllBlogs_v2(int blogCounts). Since the parameter name changed, I created another business method for version 2. This is just one example, but there could be other breaking changes which require me to create another method to support both versions. Is there any design pattern or best practice for the business/data access layer that I should follow when supporting API versioning?
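
    One common way to avoid forking the business layer per version, sketched in Java with hypothetical class names built around the question's GetAllBlogs example, is to keep a single business method and add thin per-version adapters that only translate the contract:

        import java.util.Collections;
        import java.util.List;

        class Blog { /* fields omitted */ }

        // One place owns the real logic; it is written once, independent of API version.
        class BlogService {
            List<Blog> getAllBlogs(int count) {
                // would delegate to the data access layer; stubbed out here
                return Collections.emptyList();
            }
        }

        // Thin per-version facades map each API contract onto the same service, so
        // /blogs/v1/?count=10 and /blogs/v2/?blog_count=20 share one implementation.
        class BlogsApiV1 {
            private final BlogService service = new BlogService();
            List<Blog> handle(int count) { return service.getAllBlogs(count); }
        }

        class BlogsApiV2 {
            private final BlogService service = new BlogService();
            List<Blog> handle(int blogCount) { return service.getAllBlogs(blogCount); }
        }

    With this split, a renamed parameter only touches the version facade; the business and data access layers change only when behavior genuinely changes.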

    Read the article

  • What resources about business should an internal IT department programmer be familiar with

    - by Badger
    I am a developer/analyst in an internal IT department at a medium-sized business. I have to deal with business people all the time, and many of the things I create can have a profound impact on the business. I am starting to regret not taking any business classes in college, because I don't understand the first thing about running a business, so I don't always understand what people want; the best I can do is "think it through". Does anyone have suggested methods of learning this stuff, maybe some resources? And please don't just say to ask people who work here. I have tried that before and I get nowhere.

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Localizing Business Application

    To continue our series, let's look at localizing our business applications. In today's global village, it is often no longer OK to support only one language. Many real-world business applications need to support multiple languages. To demonstrate the pattern, let's look at localizing the Silverlight Business Application Template. You can download the completed solution. Here it is in English side-by-side with a localized version (notice the Hebrew is rendered...
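    The mechanism behind this kind of localization (sketched here; the resource class and entry names are illustrative assumptions, not necessarily the template's own) is per-culture .resx resource files plus setting the UI culture before strings are resolved:

        using System.Globalization;
        using System.Threading;

        // Assumes the project contains Resources/Strings.resx plus a translated
        // Resources/Strings.he.resx, with a designer-generated Strings class that
        // exposes a WelcomeText entry (names are illustrative).
        public static class LocalizationExample
        {
            public static string GetWelcomeText(string cultureName)
            {
                var culture = new CultureInfo(cultureName);       // e.g. "en-US" or "he"
                Thread.CurrentThread.CurrentCulture = culture;    // dates, numbers, currency
                Thread.CurrentThread.CurrentUICulture = culture;  // which resource set is loaded

                // The generated resource class resolves against CurrentUICulture and
                // falls back to the neutral-language resources when no match exists.
                return MyApp.Resources.Strings.WelcomeText;
            }
        }

    In a Silverlight project you also typically list the additional cultures in the project file (a SupportedCultures entry) so the satellite resource assemblies are packaged into the XAP.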

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on Windows

    - by John Abraham
    As a follow-up to our original certification announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following Microsoft Windows operating systems:

    Release 12.1 (12.1.1 and higher)
    - Microsoft Windows Server (32-bit) (2003, 2008)
    - Microsoft Windows x64 (64-bit) (2003 [1], 2008 [1], 2008 R2 [2])

    Release 12.0 (12.0.4 and higher)
    - Microsoft Windows Server (32-bit) (2003)
    - Microsoft Windows x64 (64-bit) (2003, 2008) [1]

    Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher)
    - Microsoft Windows Server (32-bit) (2003, 2008 [1])
    - Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2) [1]

    Notes:
    [1] This OS is a 'database tier only' or 'split tier configuration' platform where the application tier must be on a fully certified E-Business Suite platform.
    [2] This OS is a 'database tier only' platform for Release 11i. For 12.1.1 or higher, it is also supported on the application tier via the migration process outlined in My Oracle Support Document 1188535.1.

    Pending Certification: E-Business Suite 12.0 with 11.2.0.3 Split Tier Certification on Microsoft Windows x64 (64-bit) (2008 R2) is in progress and will be announced separately.

    This announcement for Oracle E-Business Suite 11i and R12 includes:
    - Real Application Clusters (RAC)
    - Oracle Database Vault
    - Transparent Data Encryption (Column Encryption)
    - TDE Tablespace Encryption
    - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    - Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
    - Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12

    References:
    - MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
    - MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
    - MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
    - MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
    - MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
    - MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
    - MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
    - MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
    - MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
    - MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
    - MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
    - MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
    - MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
    - MOS Document 1188535.1 - Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2

    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Related Articles:
    - Database 11.2.0.2 Certified with EBS R12 on IBM: Linux on System z
    - EBS R12 Certified with Database 11gR2 on SLES 11
    - 11gR2 11.2.0.3 Database Certified with E-Business Suite

    Read the article

  • Oracle E-Business Suite (WebADI) integration with Oracle Open Office

    - by Harald Behnke
    Another highlight of the new Oracle Open Office Release 3.3 enterprise features is the Oracle E-Business Suite Release 12.1 (WebADI) integration. The WebADI integration in Oracle Open Office for Windows allows you to bring your Oracle E-Business Suite data into an Oracle Open Office Calc spreadsheet, where familiar data entry and modeling techniques can be used to complete your E-Business Suite tasks. You can create formatted spreadsheets on your desktop that allow you to download, view, edit, and create Oracle E-Business Suite data. Use data entry shortcuts (such as copying and pasting or dragging and dropping ranges of cells), or Calc's Open Document Format (ODF V1.2) compliant spreadsheet formulas to calculate amounts, to save time. You can combine speed and accuracy by invoking lists of values for fields within the spreadsheet. After editing the spreadsheet, you can use WebADI's validation functionality to validate the data before uploading it to Oracle E-Business Suite. Validation messages are returned to the spreadsheet, allowing you to identify and correct invalid data. This video shows a hands-on demonstration of the Oracle E-Business Suite integration: Read more about the Oracle Open Office enterprise features.

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Objects functionality (also known as Task Optimization and also known as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views. For example, in Oracle Utilities Customer Care and Billing, there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some of the screen when you wanted to register or maintain an individual, and fill out other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers.

    Using Configurable Objects this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object to cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to be more what the implementation is familiar with. The object can also be restructured. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as this varies in each country). In the new Human object you can remap the Person Identifier as a Social Security number.

    To define a business object you use a schema editor built into the browser user interface and a mapping language to set up the business objects. An example of the language, an extract of the schema for the Human business object, includes mapping tags as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. It is all stored as metadata when saved.

    Once a business object is built it can be used as the basis for code or for other business objects (inheritance is supported), called from a screen (a UI Map), or even exposed as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.

    Read the article
