Search Results

Search found 41135 results on 1646 pages for 'non relational database'.


  • Tired of Downloading Your Entire Schema? How About a Partial Download?

    - by user702295
    Proceed to document 1448266.1, Demantra Non Invasive Partial Schema Export Utility Extracting Partial Schema Dumps for Analysis. In this document you will find instructions and two SQL scripts. Give it a try. If you have a problem, feel free to post to this blog or to the community located at: https://communities.oracle.com/portal/server.pt/community/demantra_solutions/231 Regards! Jeff

    Read the article

  • JQuery / JSON + .Net Service Layer - to WCF or Not to WCF?

    - by hanzolo
    I recently had a discussion with a colleague of mine about the pros and cons of WCF. He mentioned how much code is generated to support WCF, and also the overhead required. It was mentioned that a simple jQuery/Ajax post to a .aspx page (or a handler, for that matter) that returns JSON would work more efficiently and take much less code to implement. I am also aware of the new WCF Web API and feel that technology may solve the "bloated"-ness required in attaining a proxy, etc., by just outputting JSON. So when developing a relational DB (MSSQL) storage model, with a fairly complex Business Layer (C#) and Data Access Layer (Entity Framework), what's a good technology for creating a "service layer" which will spit out View Models represented in JSON, with a CQRS (Command Query...) approach in mind? The app would use the service layer to support its required UI, as well as provide an available subset of services (outputting JSON data) for service subscribers. In other words, an admin panel to support the admin UI, and service endpoints that return JSON to access the configurations made from the administration UI. What are some potential technologies to use as the transport / communication layer? I'd like to use a pure RESTful approach, but am not against doing some URL rewriting with IIS. Obviously some of the available technologies are: WCF, WCF Web API (should this even be separate?), straight request / response (query string to .aspx / handler). Would using ASP.NET MVC solve this entire problem? Maybe their single page app approach? Any suggestions / feedback from developing this type of application? Thanks,

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offer specific advice, not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading.
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com. Smart Grid Gateway 2.0.0 - A new product to integrate with Smart Grid networks Meter Data Management 2.0.1 - A new version of the Meter Data Management product. These products are the first products to use the brand new version of the Oracle Utilities Application Framework (V4.1). The new framework builds on FW2.2 and FW4.0.2 to add exciting new features (this is just a subset): Support for Database Vault Enhancements to Business Object Maintenance Batch Statistics Portal for benchmarking Custom template user exit support File permissions now consistent with other Oracle products Use of Universal Connection Pool for all database pool access Ability to manage the batch data cache Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • Nagios: Central Monitoring

    Begin Linux: "This is part two of a three-part series on distributed monitoring. You can use passive service and host checks to allow non-central Nagios servers to collect data from a network of machines and then transfer that information to a central Nagios server."

    Read the article

  • Choosing between PHP and Java

    - by user996459
    I've recently started University, studying Computing and IT. My Uni focuses on Java. My study will consist of mathematics, 'boring' IT-related stuff and several Java units such as: -Software development with Java, -Object-oriented Java programming, -Relational databases: theory and practice (using Java), -Developing concurrent distributed systems (using Java), -Software engineering with objects (using Java). I'm trying to determine whether I should focus on Java and self-study it in my free time so that I can actually learn and become a competent Java programmer by the time I graduate, or only do enough Java to get the degree and in my free time self-study PHP and related web technologies. The job market in my area appears to be balanced between the two, salary- and availability-wise. Regardless of which path I'd take, getting a job should not be a problem; however, Java does seem to pay almost insignificantly more. In terms of my interest and career expectations, I don't have anything specific planned. I very much enjoy writing code but I don't really care what kind. So far I have equally enjoyed writing C, AutoIt, VB.NET, PHP and even Java. Basically, I'm happy as long as I get to type in code (be it low-level programming or web back-end scripting). So the question really is, would my Uni and its Java focus benefit me should I choose PHP? Or should I buy what my university is selling and stick to Java like a fly sticks to poop...? Apologies for the cryptic writing; I'm still learning English.

    Read the article

  • Card deck and sparse matrix interview questions

    - by MrDatabase
    I just had a technical phone screen with a start-up. Here are the technical questions I was asked ... and my answers. What do you think of these answers? Feel free to post better answers :-) Question 1: how would you represent a standard 52-card deck in (basically any language)? How would you shuffle the deck? Answer: use an array containing a "Card" struct or class. Each instance of Card has some unique identifier... either its position in the array or a unique integer member variable in the range [0, 51]. Shuffle the cards by traversing the array once from index zero to index 51. Randomly swap the ith card with "another card" (I didn't remember how this shuffle algorithm works exactly). Watch out for using the same probability for each card... that's a gotcha in this algorithm. I mentioned the algorithm is from Programming Pearls. Question 2: how to represent a large sparse matrix? The matrix can be very large... like 1000x1000... but only a relatively small number (~20) of the entries are non-zero. Answer: condense the array into a list of the non-zero entries. For a given entry (i,j) in the array... "map" (i,j) to a single integer k... then use k as a key into a dictionary or hashtable. For the 1000x1000 sparse array map (i,j) to k using something like f(i, j) = i + j * 1001. 1001 is just one plus the maximum of all i and j. I didn't recall exactly how this mapping worked... but the interviewer got the idea (I think). Are these good answers? I'm wondering because after I finished the second question the interviewer said the dreaded "well that's all the questions I have for now." Cheers!
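
    A minimal Java sketch of both answers may make this concrete (class and method names are illustrative, not from the interview). The shuffle is the standard Fisher-Yates algorithm the answer is reaching for: walk the array once and swap each position with a random position that has not been fixed yet; drawing the swap index from the full range every time is the equal-probability gotcha mentioned above. The sparse matrix simply keeps the non-zero cells in a hash map keyed by the (i, j) -> i + j * 1001 mapping described in the answer.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Random;

    public class InterviewSketches {

        // Question 1: a 52-card deck, with cards represented as the integers 0..51.
        // Fisher-Yates shuffle: every one of the 52! orderings is equally likely.
        static int[] shuffledDeck(Random rng) {
            int[] deck = new int[52];
            for (int i = 0; i < deck.length; i++) {
                deck[i] = i;
            }
            // Walk the array once; swap position i with a random position in [0, i].
            for (int i = deck.length - 1; i > 0; i--) {
                int j = rng.nextInt(i + 1);
                int tmp = deck[i];
                deck[i] = deck[j];
                deck[j] = tmp;
            }
            return deck;
        }

        // Question 2: a 1000x1000 matrix with only ~20 non-zero entries.
        // Store just the non-zero cells, keyed by k = i + j * 1001 as in the answer.
        static class SparseMatrix {
            private final Map<Integer, Double> cells = new HashMap<>();

            void set(int i, int j, double value) {
                if (value != 0.0) {
                    cells.put(i + j * 1001, value);
                }
            }

            double get(int i, int j) {
                return cells.getOrDefault(i + j * 1001, 0.0); // missing keys read as zero
            }
        }

        public static void main(String[] args) {
            int[] deck = shuffledDeck(new Random());
            System.out.println("First card after shuffle: " + deck[0]);

            SparseMatrix m = new SparseMatrix();
            m.set(3, 7, 42.0);
            System.out.println(m.get(3, 7) + " " + m.get(0, 0)); // 42.0 0.0
        }
    }

    In production Java you would more likely call java.util.Collections.shuffle on a List of card objects, but the hand-rolled loop shows where the equal-probability requirement comes from.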

    Read the article

  • Epsilon : An Oracle Customer Profile

    - by Anand Akela
    ZDNet published an article today based on an interview with Jeff White, vice president, technology, strategic database services at Epsilon. Jeff discussed Oracle Exadata Database Machine and Oracle Enterprise Manager with ZDNet writer Dan Kusnetzky. Read the article Epsilon: An Oracle Customer Profile. Jeff White, Epsilon VP, was honored with Oracle's Data Warehouse Leader of the Year award for Innovative Data Warehouse Deployment of Oracle Exadata and Oracle Enterprise Manager earlier this year. In one of the videos earlier this year, Jeff mentioned that Epsilon has streamlined IT administration, monitoring, and engineered systems maintenance with Oracle Enterprise Manager. Having gained in operational efficiencies, Epsilon is now providing greater efficiencies to its customers. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Unable to access other Volume in Vaio E series

    - by Rahul Ravi Kumar Shah
    Error mounting /dev/sda6 at /media/ravi/New Volume: mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda6" "/media/ravi/New Volume" Exited with: non-zero exit status 14: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda6': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.

    Read the article

  • How can I prevent spam on sites which I control?

    - by danlefree
    This is a general, community wiki question to address all non-specific spam prevention questions. If your question was closed as a duplicate of this question and you feel that the information provided here does not provide a sufficient answer, please open a discussion on Pro Webmasters Meta. For purposes of this question, spam will include: Any automated post Manually-posted content which includes links to spammers' sites Manually-posted content which includes instructions to visit a spammer's site

    Read the article

  • Building a distributed system on Amazon Web Services

    - by Songo
    Would simply using AWS to build an application make this application a distributed system? For example, if someone uses RDS for the database server, EC2 for the application itself and S3 for hosting user-uploaded media, does that make it a distributed system? If not, then what should it be called, and what is this application lacking for it to be distributed? Update Here is my take on the application to clarify my approach to building the system: The application I'm building is a social game for Facebook. I developed the application locally on a LAMP stack using Symfony2. For production I used a single EC2 Micro instance for hosting the app itself, RDS for hosting my database, S3 for the user-uploaded files and CloudFront for hosting static content. I know this may sound like a naive approach, so don't be shy to express your ideas.

    Read the article

  • What is a business problem?

    - by Juha Untinen
    Can you give an example or two of what a business problem is? I hear this word thrown around a lot, but even after searching, I cannot find a clear answer. Especially when it comes to software development. Are any of these classified as a business problem? Convert invoice from format A to format B Send data from Company A database to Company B database Create a program for time reporting Make the financial software retrieve data faster Generate a weekly data transfer volume statistic Or is a business problem something else?

    Read the article

  • Internship in License Contract Management

    - by cristian.condurache(at)oracle.com
    Hi Everyone, My name is Luca. I am an intern in the License Contract Management team in Italy. I studied Economics and Business in Pescara and finished my Master's Degree in July 2009. After a short work experience near my home town I decided to look for a job in an international company. I got in touch with Oracle in January 2010. I had a telephone interview and then a face-to-face interview. On a cold and grey morning, I arrived in Milan... my first impression was fantastic... a big, modern building with wide TVs everywhere. I was a little nervous but very excited. I understood this could be a great opportunity... The interview went well and I started to work in March. After a training period I was quickly involved in the closing of the last quarter of the fiscal year - of which May is the last month at Oracle. Working as a License Contract Manager is a real challenge for a fresh graduate. It involves thoroughly understanding the Oracle Policies and Practices with regard to License Contracts. In my experience, especially in May, I learnt to work under high pressure, within time constraints, and to keep up with constant changes. In this period I also had the opportunity to be involved in different negotiations, being directly in contact with the customers. This helped me to develop my relational skills during complex transactions. Looking back at the nine months at Oracle I can say I have a better understanding of the IT world. It is a complex environment that changes continuously, offering new challenges to learn from every time. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: License Contract Management,opportunity,Oracle Policies,internship

    Read the article

  • The Calmest IT Guy in the World

    - by Markus Weber
    Unplanned outages and downtime still result not only in major productivity losses, but also in major financial losses. Along the same lines, if zero planned downtime and zero data loss are key to your IT environment and your business requirements, planning for those becomes very important, all while balancing performance, high availability, and cost. Oracle Database High Availability technologies will help you achieve these mission-critical goals, and are reflected in Oracle's best practices offerings of the Maximum Availability Architecture, or MAA. We created three neat, short videos showcasing some typical use cases, and highlighting three important components (amongst many more) of MAA: Oracle Real Application Clusters (RAC) Oracle Active Data Guard Oracle Flashback Technologies Make sure to watch those videos here, and learn about challenges and solutions around High Availability database environments from a recent Independent Oracle Users Group (IOUG) survey.

    Read the article

  • CodePlex Daily Summary for Monday, June 25, 2012

    CodePlex Daily Summary for Monday, June 25, 2012Popular ReleasesUmbraco CMS: Umbraco CMS 5.2: Development on Umbraco v5 discontinued After much discussion and consultation with leaders from the Umbraco community it was decided that work on the v5 branch would be discontinued with efforts being refocused on the stable and feature rich v4 branch. For full details as to why this decision was made please watch the CodeGarden 12 Keynote. What about all that hard work?!?? We are not binning everything and it does not mean that all work done on 5 is lost! we are taking all of the best and m...IIS Express Manager: IIS Express 0.31 B: V0.1B - 04 May, 2012 Initiated Project. V0.2B - 05May, 2012 1. Fixed small bug. Threw error when stop button was pressed in an already stopped application. 2. Removed start and stop button. Double clicking on list items will now stop / start the websites. 3. Improved code readability. 4. Changed Orientation of Buttons in UI. V0.3B - 06May, 2012 1. Complete modification of IISEM and process ID handling 2. IISEM is now capable of reflecting the existing IISExpress processes right from startup...SPMegaMenu 0.2.0.a: SPMegaMenu 0.2.0.a: SPMegaMenu 0.2.0.a - *Refined the menu to allow for sub category additions. *Release 0.1.0.a did not allow for sub categories. *Also added a Javascript Array Prototype to facilitate removal of duplicates in the sub category return. (the prototype is added as a workaround as I am not able to get the CAML GroupBy function to work correctly against a lookup column)CodeGenerate: CodeGenerate Alpha: The Project can auto generate C# code. Include BLL Layer、Domain Layer、IDAL Layer、DAL Layer. Support SqlServer And Oracle This is a alpha program,but which can run and generate code. Generate database table info into MS WordXDA ROM HUB: XDA ROM HUB v0.9: Kernel listing added -- Thanks to iONEx Added scripts installer button. Added "Nandroid On The Go" -- Perform a Nandroid backup without a PC! Added official Android app!ExtAspNet: ExtAspNet v3.1.8.1: +2012-06-24 v3.1.8 +????Grid???????(???????ExpandUnusedSpace????????)(??)。 -????MinColumnWidth(??????)。 -????AutoExpandColumn,???????????????(ColumnID)(?????ForceFitFirstTime??ForceFitAllTime,??????)。 -????AutoExpandColumnMax?AutoExpandColumnMin。 -????ForceFitFirstTime,????????????,??????????(????????????)。 -????ForceFitAllTime,????????????,??????????(??????????????????)。 -????VerticalScrollWidth,????????(??????????,0?????????????)。 -????grid/grid_forcefit.aspx。 -???????????En...AJAX Control Toolkit: June 2012 Release: AJAX Control Toolkit Release Notes - June 2012 Release Version 60623June 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found here: 11121. - Pages using ...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.5: Version: 2.5.0.5 (Milestone 5): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Add IsInDesignMode property to the WafConfiguration class. 
WAF: Introduce the IModuleController interface. WAF: Add ...Windows 8 Metro RSS Reader: Metro RSS Reader.v7: Updated for Windows 8 Release Preview Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesConfuser: Confuser 1.9: Change log: * Stable output (i.e. given the same seed & input assemblies, you'll get the same output assemblies) + Generate debug symbols, now it is possible to debug the output under a debugger! (Of course without enabling anti debug) + Generating obfuscation database, most of the obfuscation data in stored in it. + Two tools utilizing the obfuscation database (Database viewer & Stack trace decoder) * Change the protection scheme -----Please read Bug Report before you report a bug-----...XDesigner.Development: First release: First releaseBlackJumboDog: Ver5.6.5: 2012.06.22 Ver5.6.5  (1) FTP??????? EPSV ?? EPRT ???????DotNetNuke® Form and List: 06.00.01: DotNetNuke Form and List 06.00.01 Changes in 06.00.01 Icons are shown in module action buttons (workaraound to core issue with IconAPI) Fix to Token2XSL Editor, changing List type raised exception MakeTumbnail and ShowXml handlers had been missing in install package Updated help texts for better understanding of filter statement, token support in email subject and css style Major changes for fnL 6.0: DNN 6 Form Patterns including modal PopUps and Tabs http://www.dotnetnuke.com/Po...MVVM Light Toolkit: V4RTM (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RTM. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibs An installer with binaries, snippets and templates will follow ASAP.Weapsy - ASP.NET MVC CMS: 1.0.0: - Some changes to Layout and CSS - Changed version number to 1.0.0.0 - Solved Cache and Session items handler error in IIS 7 - Created the Modules, Plugins and Widgets Areas - Replaced CKEditor with TinyMCE - Created the System Info page - Minor changesAcDown????? - AcDown Downloader Framework: AcDown????? v3.11.7: ?? ●AcDown??????????、??、??????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ...NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support new VS2012 as well as VS2010. 
This release is also fixing the issue The "Comment Selection" include the first line after the selection If the new NShader version doesn't highlight your shader, you can try to: Remove the registry entry: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache Remove all lines using "fx" or "hlsl" in file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...3D Landmark Recognition: Landmark3D Dataset V1.0 (7z 1 of 3): Landmark3D Dataset Version 1.0Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0: System Requirements OS Windows 7 Windows Vista Windows Server 2008 Windows Server 2008 R2 Web Server Internet Information Service 7.0 or above .NET Framework .NET Framework 4.0 WCF Activation feature HTTP Activation Non-HTTP Activation for net.pipe/net.tcp WCF bindings ASP.NET MVC ASP.NET MVC 3.0 Database Microsoft SQL Server 2005 Microsoft SQL Server 2008 Microsoft SQL Server 2008 R2 Additional Deployment Configuration Started Windows Process Activation service Start...MFCMAPI: June 2012 Release: Build: 15.0.0.1034 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeNew ProjectsAudio-Gallery-Suite: Audio-Gallery-Suite is a complete audio gallery solution that includes a web audio gallery and a windows application for its complete management.caoliu_????????-????: ????????????Decanini: Teste de SummaryDevToolkit: placeholderdotCommand: dotCommand project explores various .NET implementations of Command design pattern, by providing and measuring those implementations.Engine Wars Chess Manager Service: EngineWars is Client/Server framework to host chess matches of humans vs humans, humans vs Chess Engines, and engines vs engines in casual play and tournaments.FsHtml: A tiny HTML parser in F#JAVALINKDotHandler: BÁSICO Sistema para avaliação de A5LP1 - IFSP - Campi SÃO PAULOKylinORM ????: KylinORM???????,??AOP?DDD???????????。LegendJS 2D: A 2D engine written in Javascript and Canvas. Aims to be easy learn and flexible enough to develop complex 2D game or other 2D application. LegendJS manages youMetroToolbox: A library of useful tools for developer metro apps on windows rt.M-i-c-r-o-S-o-f-t-W-M-S: M8i8c8r8o8S8o8f8t M8i8c8r8o8S8o8f8t M8i8c8r8o8S8o8f8tMineSweeping: 1234567890Multiplatform GTK Docking Panel in MONO: This is an open source project to develop a multiplatform dock-panel library in Gtk for Mono.MyAbcdefg_abcdefg: MyAbcdefg_abcdefgPlugin PagSeguro for NopCommerce 2.5: PagSeguro paymento module for nopcommerce 2.5 This is a brazilian payment method.Reckoning: Reckoning is a word counter that reports the frequencies of words in a piece of text. Seo Proxy: Seo Proxy is a proxy fetcher and checker. Project is intended to create ultimate tool to get proxies from websites and build quality lists.SharePoint Auriga blog: -SharpToolkit: placeholderShift2D: Shift2D????XNA?? Shift2D??????WPF、Silverlight????,????????,???????,?????????。Simple TFTP loader for Windows Embedded Compact EBOOT: Utility to download NK.BIN files to EBOOT without using PLatform Builder'SUMC Reader: SUMC reader - read info from http://m.sumc.bg I named SUMC, because that's the official nickname of Sofia public transport. 
t: tVertical Shooter RPG: Vertical Shooter RPG is an XNA open source game that will be an update to the vertical shooter genre by adding a world map and upgradable options.w___m___s___uupiwqewrqewypouyupqurpe: sdfasdfadsfafadsfsadfworkmanagesystem: This is the Course Project for School of E&M NCEPU owned by OLDBIG12. Now OLDBIG12 and Silent are develeoping it. It's just my first project on CodePlex.

    Read the article

  • Firefox slowdown and 100% CPU after gnome-settings-daemon update

    - by digitaljail
    I'm on Ubuntu 12.04 (Unity) with Firefox 17.0.1 installed. After the latest update of gnome-settings-daemon (3.4.2-0ubuntu0.5, 3.4.2-0ubuntu0.6), Firefox started taking 100% of my CPU periodically. I have tried various things: 1) Disabled all the non-standard extensions = no change to the CPU usage. 2) Disabled the Flash plugins (also updated at the same time) = no change to CPU usage. 3) Disabled the "Global Menu Integration" (3.6.4) extension = hooray, CPU OK! Any suggestion for getting global menu integration back with no more CPU problems?

    Read the article

  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them: They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors. It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as "experimental features"). This isn't the case quite as much, but there's still always the question as to how well it's supported. Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance. I've gotten by without any need for them at all... sometimes I think that they're extraneous. Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including: Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient, but clearer way as long as the difference is negligible or unimportant to the application as a whole. Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability). "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps (or, alternatively, are they the best tool for most cases they're used for today)? My question is this: Is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for examples or illustration, but by themselves, won't be enough to convince me.
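
    For what it's worth, here is a hypothetical Java sketch (not from the question - the author works in Perl) contrasting the two styles: one pattern that leans on lookahead assertions versus the multiple-simple-regexes approach the author prefers. The rule being checked (at least one digit, at least one upper-case letter, eight or more characters) is invented purely for illustration.

    import java.util.regex.Pattern;

    public class LookaheadDemo {

        // One pattern using lookahead assertions: at least one digit, at least one
        // upper-case letter, and 8+ characters overall. Compact, but harder to read.
        private static final Pattern WITH_LOOKAHEAD =
                Pattern.compile("^(?=.*\\d)(?=.*[A-Z]).{8,}$");

        // The same rule expressed as several simple checks, one concern per pattern.
        private static final Pattern HAS_DIGIT = Pattern.compile("\\d");
        private static final Pattern HAS_UPPER = Pattern.compile("[A-Z]");

        static boolean okWithLookahead(String s) {
            return WITH_LOOKAHEAD.matcher(s).matches();
        }

        static boolean okWithSimplePatterns(String s) {
            return s.length() >= 8
                    && HAS_DIGIT.matcher(s).find()
                    && HAS_UPPER.matcher(s).find();
        }

        public static void main(String[] args) {
            for (String candidate : new String[]{"Tr0ub4dor3x", "alllowercase", "Short1"}) {
                System.out.printf("%-14s lookahead=%b simple=%b%n",
                        candidate, okWithLookahead(candidate), okWithSimplePatterns(candidate));
            }
        }
    }

    Both versions accept and reject the same strings; whether the single lookahead pattern or the three-step version is clearer is exactly the readability judgment the question is about.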

    Read the article

  • Tell me your e-mail address and I'll tell you your level of computer knowledge

    Tell me your e-mail address and I'll tell you your level of computer knowledge. Whether or not you are an ISP customer, almost all of us have at least one e-mail address. The satirical blog The Oatmeal has set about analyzing an Internet user's personality based on the e-mail address they created for themselves. Here are the supposed character traits of the owners of electronic mailboxes: If you use: - Your own domain (for example [email protected] or [email protected]): There is a good chance that you are very skilled with computers and...

    Read the article

  • Will bing bot index pages with invalid SSL certificates?

    - by Martin
    Bingbot and Yahoo Slurp do not support SNI (Server Name Indication) when using SSL. Ignoring other workarounds (multi-domain certificates, non-SSL content, etc.), will Bingbot index pages that have an invalid SSL certificate, e.g. one issued for example.net but used on example.com? If possible please provide an example from Yahoo or Bing. I have found websites in Bing that use self-signed certificates and are indexed correctly, but what about invalid certificates?

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table. Use Warehouse CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5)) CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date) CREATE PROC Integration.MergePolicy as begin begin tran Merge fact.Policy as tgt Using Integration.MergePolicy as Src On (tgt.PolicyId = Src.PolicyId) When not matched by Target then Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate) values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate) When matched and src.Operation = 'U' then Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate When matched and src.Operation = 'D' then Delete ; delete from Integration.WorkPolicy commit end Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
(I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. I've reproduced the above from memory, the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a cluster index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and added the clustered index, both of which took very little time). I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike  

    Read the article

  • How can I make the Ubuntu Software Center load?

    - by Kieran
    I launch it and it goes grey almost immediately. Closing it prompts me to "force close" and no error report is given. I launched it in Terminal and this was the resulting log: 2012-11-23 22:39:25,175 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-11-23 22:39:25,179 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True 2012-11-23 22:39:25,409 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-11-23 22:39:25,412 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')' 2012-11-23 22:39:25,412 - root - ERROR - Could not find any typelib for LaunchpadIntegration 2012-11-23 22:39:25,474 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None

    Read the article

  • What are the benefits of PHP?

    - by acme
    Everybody knows that people have prejudices against certain programming languages. PHP especially seems to suffer from problems of its past and some other things (like loose typing), and is often called a non-serious programming language that should not be used for professional applications. In that particular case, PHP: how do you argue for using PHP as your chosen programming language for web applications? What are the benefits? Where is PHP better than ColdFusion, Java, etc.?

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview The purpose of this article is to describe some of the important foundational concepts of ATG.  This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc.  For more information on these topics, please see the online product manuals. Modules The first concept is called the 'ATG Module'.  Simply put, you can think of modules as the building blocks for ATG applications.  The ATG development team builds the out of the box product using modules (these are the 'out of the box' modules).  Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out of the box ATG modules.  Modules can be very simple - containing minimal definition, and perhaps a small amount of configuration.  Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, etc.  Modules generally will have dependencies on other modules (the modules beneath it).  For example, the Commerce Reference Store module (CRS) requires the DCS (out of the box commerce) module. Modules have a ton of value because they provide a way to decouple a customers implementation from the out of the box ATG modules.  This allows for a much easier job when it comes time to upgrade the ATG platform.  Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications. One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start.  One of the first things ATG does is to look through all the modules you specified, and for each one, determine a list of modules that are also required to start (based on each modules dependencies).  Once this final, ordered list is determined, ATG continues to boot up.  One of the outputs from the ordered list of modules is that each module can contain it's own classes and configuration.  During boot, the ordered list of modules drives the unified classpath and configpath.  This is what determines which classes override others, and which configuration overrides other configuration.  Think of it as a layered approach. The structure of a module is well defined.  It simply looks like a folder in a filesystem that has certain other folders and files within it.  Here is a list of items that can appear in a module: MyModule: META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module.  One important property is what other modules this module depends on. config - this is typically present in most modules.  It defines a tree structure (folders containing properties files, XML, etc) that maps to ATG components (these are described below). lib - this contains the classes (typically in jarred format) for any code defined in this module j2ee - this is where any web-apps would be stored. src - in case you want to include the source code for this module, it's standard practice to put it here sql - if your module requires any additions to the database schema, you should place that schema here Here's a screenshots of a module: Modules can also contain sub-modules.  A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule). 
Finally, it is important to fully understand how modules work if you are going to leverage them effectively. There are many ways to design the modules you create, and some approaches are better than others, especially if you plan to share functionality between multiple ATG applications.

Components
A component in ATG can be thought of as a single item that performs a set of related tasks. An example could be a ProductViews component, used to store information about which products the current customer has viewed. Components have properties (also called attributes). The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products in the order they were viewed). Component properties typically offer get and set methods used to retrieve and store their values. Components usually offer other useful methods besides get and set as well; in the ProductViews component, we might want a hasViewed method that tells you whether the customer has viewed a certain product or not.

Components are organized in a tree-like hierarchy called 'nucleus'. Nucleus is used to locate and instantiate ATG components, so when you create a new ATG component, it can be found 'within' nucleus. Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work. It is also a mechanism to prevent redundant configuration: define it once and refer to it from everywhere.
[Screenshot: a component as it appears in nucleus]

Components can be extremely simple (a single property with a get method) or rather complex, offering many properties and methods. To be an ATG component, a few things are required:
- a class - you can reference an existing out-of-the-box class or write your own
- a properties file - this is used to define your component
- both of the above must be located 'within' nucleus, by placing them in the correct spot in your module's config folder

Within the properties file, you point to the class you want to use:
$class=com.mycompany.myclass
You may also want to define the scope of the component (request, session, or global):
$scope=session

In summary, ATG components live in nucleus, generally have links to other components, and provide some meaningful type of work. You can configure components as well as extend their functionality by writing code.
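As a concrete illustration, here is a minimal sketch of the ProductViews component described above: a properties file placed under the module's config folder, plus a small Java class. The package, nucleus path, and property names are invented for this example, and atg.nucleus.GenericService is used as the base class on the assumption that this is an ordinary session-scoped service component; treat it as a sketch rather than production code.

    # MyModule/config/com/mycompany/userprofiling/ProductViews.properties (hypothetical path)
    $class=com.mycompany.userprofiling.ProductViews
    $scope=session

    package com.mycompany.userprofiling;

    import java.util.ArrayList;
    import java.util.List;
    import atg.nucleus.GenericService;

    /**
     * Session-scoped component that remembers which products the
     * current customer has viewed. Illustrative sketch only.
     */
    public class ProductViews extends GenericService {

        private String mLastProductViewed;
        private final List<String> mProductViewList = new ArrayList<String>();

        /** Returns the ID of the most recently viewed product, or null. */
        public String getLastProductViewed() {
            return mLastProductViewed;
        }

        /** Records a product view, keeping the list in viewing order. */
        public void setLastProductViewed(String pProductId) {
            mLastProductViewed = pProductId;
            mProductViewList.add(pProductId);
        }

        /** Returns the product IDs in the order they were viewed. */
        public List<String> getProductViewList() {
            return mProductViewList;
        }

        /** True if the customer has viewed the given product this session. */
        public boolean hasViewed(String pProductId) {
            return mProductViewList.contains(pProductId);
        }
    }

With the properties file in place, other components can refer to this one by its nucleus path (/com/mycompany/userprofiling/ProductViews), and pages can read its properties through the DSP tag library.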
Repositories
Repositories (a.k.a. the Data Anywhere Architecture) are the mechanism ATG uses to access data stored primarily in relational databases, but also in LDAP or other backend systems. ATG applications are required to be very high performance, and data access is critical: if not handled properly, it can become a bottleneck. ATG's repository functionality has been around for a long time and has proven to be extremely scalable. Developers new to ATG need to understand how repositories work, as this is a critical aspect of the ATG architecture. Repositories essentially map relational tables to objects in ATG, and also handle caching. ATG defines many repositories out of the box (user profile, catalog, orders, etc.), each comprising both the underlying database schema and the associated repository definition files (XML).

It is fully expected that implementations will extend and change the out-of-the-box repository definitions, so there is a prescribed approach to doing this. The first thing is to encapsulate your repository definition additions and changes within your own module (as described above). The other important best practice is to never modify the out-of-the-box schema - in other words, don't add columns to existing ATG tables; create your own new tables instead. These practices help ensure you can easily upgrade your application at a later date.

xml-combination
As mentioned earlier, when you start ATG, the order of the modules determines the final configpath. Files within this configpath are 'layered' so that modules on top can override the configuration of modules below them. The same concept applies to repository definition files. If you want to add a few properties to the out-of-the-box user profile, you simply create an XML file containing only your additions and place it in the correct location in your module (a small example appears at the end of this section). At boot time, your definition is combined (hence the term xml-combination) with the lower, out-of-the-box modules, and the result is a user profile that contains everything - out of the box, plus your additions. Aside from adding properties, there are also ways to remove and change properties.

types of properties
Aside from the normal 'database backed' properties, there are a few other interesting types:
- transient properties - these live in memory and are not backed by any database column. They are useful for temporary storage.
- java-backed properties - these are transient by nature, but when you access the property (by calling its get method), instead of looking up a piece of data it performs some logic and returns the result. 'Age' is a good example: if you store a birth date on the profile but your business rules are defined in terms of someone's age, you can create a simple java-backed property that compares the birth date to the current date and returns the person's age.
- derived properties - these allow for inheritance within the repository structure. You could define a property at the category level and have the product inherit its value as well as override it. This is useful for setting defaults, with the ability to override.

caching
There are a number of different caching modes, useful at different times depending on the nature of the data being cached. For example, simple cache mode is useful for things like user profiles, because a given user profile is typically used by only a single instance of ATG at one time. Simple cache mode is also useful for read-only data such as the product catalog. Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customer's order. There are many options for configuring caching that are outside the scope of this article; please refer to the product manuals for more details.
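To show what xml-combination looks like in practice, here is a minimal sketch of a repository definition fragment that adds one property to the out-of-the-box user profile. The fragment would live in your module at config/atg/userprofiling/userProfile.xml; the table name, column names, and property name are invented for this example, and the exact XML attributes should be verified against the repository DTD for your ATG version.

    <!-- MyModule/config/atg/userprofiling/userProfile.xml - illustrative sketch only -->
    <gsa-template>
      <item-descriptor name="user">
        <!-- New properties go in a new auxiliary table; out-of-the-box tables are left untouched -->
        <table name="mycompany_user_ext" type="auxiliary" id-column-name="user_id">
          <property name="loyaltyTier" column-name="loyalty_tier" data-type="string"/>
        </table>
      </item-descriptor>
    </gsa-template>

At startup this fragment is xml-combined with the definitions beneath it, so the resulting user item descriptor contains every out-of-the-box property plus loyaltyTier, and upgrades remain straightforward because no ATG table was modified.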
Other important concepts - out of scope for this article
There are a whole host of concepts that are very important pieces of the ATG platform but are out of scope for this article. Here is a brief description of some of them:
- formhandlers - ATG components that handle form submissions by users.
- pipelines - configurable chains of logic used for things like handling a request (the request pipeline) or checking out an order.
- special kinds of repositories (versioned, files, secure, ...) - there are a couple of different repository types used in various situations. See the manuals for more information.
- web development - JSP / DSP tag library - ATG provides a traditional approach to developing web applications through a tag library called the DSP library. This library is used throughout your JSP pages to interact with ATG components (a tiny example follows this list).
- messaging - a message sub-system used as another way for components to interact.
- personalization - the ability for business users to define a personalized user experience for customers. See the other blog posts related to personalization.
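For readers who have not seen the DSP library before, here is a minimal, assumed JSP fragment that imports the hypothetical ProductViews component from earlier and renders one of its properties. The taglib URI and tag names follow common ATG conventions, but confirm them against the documentation for your version.

    <%-- viewedProducts.jsp - illustrative sketch only --%>
    <%@ taglib uri="/dspTaglib" prefix="dsp" %>
    <dsp:page>
      <%-- Make the nucleus component available to this page --%>
      <dsp:importbean bean="/com/mycompany/userprofiling/ProductViews"/>

      <p>Last product you viewed:
        <dsp:valueof bean="ProductViews.lastProductViewed">none yet</dsp:valueof>
      </p>
    </dsp:page>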

    Read the article
