Search Results

Search found 12824 results on 513 pages for 'glen little'.


  • Create Word/RTF file with table of contents from Java

    - by Glen
    Hi, I want to create a Word or RTF file with a table of contents (with links to each section) from Java. From my understanding, iText & Apache POI do not support generating a table of contents. Some clients of the app still use older versions of Word, so I need a library that supports the older Word doc format. Does anyone know how I can do this? Thanks, Glen
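    One possible fallback, if no library supports it directly: RTF is plain text, so the file - including the table-of-contents field - can be written from Java by hand. A hedged sketch follows; the TOC field and stylesheet syntax below are from the RTF specification as I recall it, so verify against the spec. Word builds the field contents when the user refreshes fields (e.g. Ctrl+A, then F9):

        import java.io.FileWriter;
        import java.io.IOException;

        public class RtfTocSketch {
            public static void main(String[] args) throws IOException {
                FileWriter out = new FileWriter("toc-demo.rtf");
                out.write("{\\rtf1\\ansi\n");
                // Declare style 1 as "heading 1" so the TOC field can collect it.
                out.write("{\\stylesheet{\\s0 Normal;}{\\s1\\b\\fs32 heading 1;}}\n");
                // TOC field: \o "1-1" gathers heading level 1, \h makes entries hyperlinks.
                out.write("{\\field{\\*\\fldinst TOC \\\\o \"1-1\" \\\\h }"
                        + "{\\fldrslt Select all and press F9 to build the TOC}}\\par\n");
                out.write("{\\s1\\b\\fs32 First Section\\par}\n");
                out.write("Body text for the first section.\\par\n");
                out.write("{\\s1\\b\\fs32 Second Section\\par}\n");
                out.write("Body text for the second section.\\par\n");
                out.write("}");
                out.close();
            }
        }

    Since even old Word versions open RTF natively, this sidesteps the binary .doc problem, though the user (or an automation step) still has to refresh the field once.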

    Read the article

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how Tuxedo performs load balancing. This is often asked by customers that see an imbalance in the number of requests handled by servers offering a specific service.
    First of all, let me say that Tuxedo really does load or request optimization instead of load balancing. What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time. Simple round-robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests. But the question I ask is, "to what benefit?"
    Instead, Tuxedo scans the queues (which may or may not correspond to servers, based upon SSSQ - Single Server Single Queue - or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed. The scan is always performed in the same order, and if an empty queue is found during the scan, the request is immediately placed on that queue and request routing is done. However, should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it, where work is the sum of all the queued requests weighted by their "load" value as defined in the UBBCONFIG file.
    What this means is that under light loads, only the first few queues (servers) process all the requests, as an empty queue is often found before reaching the end of the scan. Thus the first few servers in the scan order handle most of the requests. While this sounds non-optimal, it in fact capitalizes on the underlying operating system and hardware behavior to produce the best possible performance. Round-robin scheduling would spread the requests across all the available servers, requiring all of them to be in memory and likely sharing little in the way of hardware or memory caches. Tuxedo's approach maximizes the various caches and thus optimizes overall performance. Hopefully this makes sense and explains why you may see a few servers handling most of the requests. Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed.
    Next post I'll try to cover how this applies to servers in a clustered (MP) environment, because the load balancing there is a little more complicated.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

    Read the article

  • Run-On Sentences in Technical Writing

    - by Sean Noodleson Neilan
    This is just a question to think about. When you write technical documentation and programming comments, do you ever find yourself writing run-on sentences in order to be more precise? Is packing more technical information into one sentence better than creating many little sentences each with a little bit of technical information? I know it's better to have lots of little classes in their own little files. Perhaps this doesn't apply to writing?

    Read the article

  • I've got two technical degrees but little in the way of experience. How do I get into programming? [closed]

    - by Neonfirelights
    I'm looking for a job; I want to break into programming. I'm looking for the right sort of role and the right place to look for it; I would really appreciate input from someone with industry experience. I've got an excellent academic record: BSc Physics (2:1), MSc Computer Graphics, Vision and Imaging (expecting Merit) from two world-ranking universities. I have advanced technical knowledge of C/C++ and Matlab and experience working with C# and VB.NET. Unfortunately I don't have much in the way of commercial experience; unlike a lot of people I know, my undergraduate degree didn't come with a sandwich placement. Where can I go to break into the software industry?

    Read the article

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS. And it is not about using any SharePoint Services features! The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the file and close the files... all in code. In previous versions of Excel, the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single-threaded and not designed for use on a server. Most of the articles about this were written before Office 2010. But now, Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is... is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high performance, distributed computing aspect of HPC Services for Excel... just the ability to run Excel on a server. Can that now be done? Thanks, Glen

    Read the article

  • New Tuxedo White Papers

    - by todd.little
    As part of the Tuxedo 11gR1 release, I've written two new white papers on Tuxedo. One is called "Tuxedo in a SOA World" and discusses how Tuxedo fits into SOA-based applications. It covers most of the various connectivity options from Tuxedo into SOA environments and gives guidance as to which connectivity options are best suited for a particular application requirement. The other white paper, "SCA: Bringing Modern SOA Programming to Tuxedo", is of a more technical bent and focuses on using the SCA features in SALT to easily build SOA-based applications on Tuxedo without using a lot of technical APIs. In fact, services built using SALT's SCA support don't require any technical APIs, just pure business logic, and SCA clients need at most a couple of API calls, simply to look up a service. You can find these two new white papers, as well as some additional white papers, at http://www.oracle.com/technology/products/tuxedo/index.html.

    Read the article

  • Oracle Healthcare Data Warehouse Foundations RELEASED!

    - by Glen McCallum
    Since I joined Oracle I've been working on Oracle Healthcare Data Warehouse Foundations (OHDF). It was officially released earlier this month at HIMSS. But for over 2 months prior to that I had to keep it a secret. It was so tough; I didn't even tell my family when they asked me what I was working on. Anyway, OHDF is an enterprise healthcare data model. Unlike Healthcare Transaction Base, OHDF is in 3rd normal form. It is logical and reasonably easy to understand for anyone with some experience in the healthcare domain. OHDF is emerging as the core of Oracle's healthcare business intelligence applications.

    Read the article

  • Tuxedo 11gR1 Client Server Affinity

    - by todd.little
    One of the major new features in Oracle Tuxedo 11gR1 is the ability to define an affinity between clients and servers. In previous releases of Tuxedo, the only way to ensure that multiple requests from a client went to the same server was to establish a conversation with tpconnect() and then use tpsend() and tprecv(). Although this works, it has some drawbacks. First, for single-threaded servers, the server is tied up for the entire duration of the conversation and cannot service other clients - an obvious scalability issue. I believe the more significant drawback is that the application programmer has to switch from the simple request/response model provided by tpcall() to the half-duplex tpsend() and tprecv() calls used with conversations. Switching between the two typically requires a fair amount of redesign and recoding.
    The Client Server Affinity feature in Tuxedo 11gR1 allows an application, by way of configuration, to define affinities that can exist between clients and servers. This is done in the *SERVICES section of the UBBCONFIG file. Using new parameters for services defined in the *SERVICES section, customers can determine when an affinity session is created or deleted, the scope of the affinity, and whether requests can be routed outside the affinity scope.
    The AFFINITYSCOPE parameter can be MACHINE, GROUP, or SERVER, meaning that while the affinity session is in place, all requests from the client will be routed to the same MACHINE, GROUP, or SERVER. The creation and deletion of affinity is defined by the SESSIONROLE parameter: a service can be defined as either BEGIN, END, or NONE, where BEGIN starts an affinity session, END deletes the affinity session, and NONE does not impact the affinity session. Finally, customers can define how strictly they want the affinity scope adhered to using the AFFINITYSTRICT parameter. If set to MANDATORY, all requests made during an affinity session will be routed to a server in the affinity scope. Thus if the affinity scope is SERVER, all subsequent tpcall() requests will be sent to the same server the affinity scope was established with; if that server doesn't offer the requested service, even though other servers do, the call will fail with TPNOENT. Setting AFFINITYSTRICT to PRECEDENT tells Tuxedo to try to route the request to a server in the affinity scope, but if that's not possible, Tuxedo can route the request to servers out of scope.
    All of this begs the question: why have this feature? There are many uses for this capability, but the most common is when there is state maintained in a server, a group of servers, or a machine, and subsequent requests from a client must be routed to where that state is maintained. This might be something as simple as a database cursor maintained by a server on behalf of a client. Alternatively, it might be that the server has a connection to an external system and subsequent requests need to go back to the server that has that connection. A more sophisticated case is where a group of servers maintains some sort of cache in shared memory and subsequent requests need to be routed to where the cache is maintained. Although this last case might be handled by data-dependent routing, using client/server affinity allows the cache to be partitioned dynamically instead of statically.
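    For illustration, here is a hedged sketch of how these parameters might appear in the *SERVICES section of the UBBCONFIG file (the service names are invented; consult the Tuxedo 11gR1 reference for exact syntax):

        *SERVICES
        OPEN_ACCOUNT   SESSIONROLE=BEGIN AFFINITYSCOPE=GROUP AFFINITYSTRICT=PRECEDENT
        DEBIT          # routed to the session's group while the affinity session is active
        CLOSE_ACCOUNT  SESSIONROLE=END

    With a configuration along these lines, a client's call to OPEN_ACCOUNT starts an affinity session scoped to the handling group; later calls such as DEBIT prefer that group but, because of PRECEDENT, may be routed elsewhere if the group cannot service them; CLOSE_ACCOUNT ends the session.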

    Read the article

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and offers a look and feel familiar to those that have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis.
    A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address, giving even more end-to-end information about a specific Tuxedo request. As well, the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs. failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes the SALT gateway servers, providing detailed performance metrics about those servers.
    Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed both on the monitored node, via environment variable or command, and from the console. Now all alert definitions and policy definitions are made using the console only. For alerts, this means that regardless of where an alert is evaluated, it is defined in one and only one place. Thus the plug-in alert mechanism of previous releases can now be managed using the TSAM console, making SLA alert definition much easier and cleaner.
    Finally, there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application, such as regions. Look for a future blog entry with more details on this, as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

    Read the article

  • Transitioning to Transaction Base

    - by Glen McCallum
    I was actually hired at Oracle Health Sciences to work on the HTB application. Long story short, when HL7 version 3 was relatively new ... Canada made an initial sprint at adoption. Since then progress has slowed. I was part of that initial adoption and learned a lot about the Reference Information Model. At that time we worked mostly with CDA R2 Level 3 (fully coded / structured XML) documents.
    HTB is an HL7 v3 RIM-based repository. Love it or hate it, the product is unique in the marketplace. One of the advantages is the flexibility of the model: you can aggregate information from literally any source system without any HTB data model modification and then use that data in a semantically meaningful way. That's extremely powerful.
    There is a minor speed bump getting up to speed with HL7 v3, there's no doubt about that. I believe that is why Oracle recruited me from Canada originally - so I could have a running start at HTB. In the near future I'm looking forward to an application deep dive with John Hatem.

    Read the article

  • Webcast: The ART of Migrating and Modernizing IBM Mainframe Applications

    - by todd.little
    Tuxedo provides an excellent platform for migrating mainframe applications to distributed systems. As the only distributed transaction processing monitor that offers quality of service comparable to or better than mainframe systems, Tuxedo allows customers to migrate their existing mainframe-based applications to a platform with a much lower total cost of ownership. Please join us on Thursday, April 29, at 10:00am Pacific Time for this exciting webcast covering the new Oracle Tuxedo Application Runtime for CICS and Batch 11g. Find out how easy it is to migrate your CICS and mainframe batch applications to Tuxedo.

    Read the article

  • HDWF [was OHDF]

    - by Glen McCallum
    Acronyms, acronyms ... same name (Oracle Healthcare Data Warehouse Foundation). Now it goes by HDWF. Don't ask me why. HDWF Version 2.0 was released quietly on 12 May. I'm told it is available on eDelivery. I've been spending more time working on HDWF this month. There's no question Oracle is moving at full steam on this one. I've even spent a few nights this week working on India time with the team over there. We're busy moving Oracle's Operating Room Analytics application onto the new HDWF enterprise healthcare model. It's really been a great illustration of the comprehensiveness of the model. It was easily able to accommodate all of the information required by ORA downstream.

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview
    Needless to say, some ATG applications are more complex than others. Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model. The really complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and processes to go along with it). While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for complex applications. Why? It's all about time and money. If you are unable to manage your complex applications in an efficient manner, the cost of managing them will increase dramatically, as will the time to get things done (time to market). On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are.
    This article is intended to discuss a number of key areas to think about when designing complex applications on ATG. Some of this can get fairly technical, so it may help to get some background first. You can get enough of the required background information from this post. After reading that, come back here and follow along.
    Application Design
    Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries. To get started, let's assume that we are talking about an application like that - one that has these properties:
    - operates in multiple countries - must support multiple sites, catalogs, languages, and currencies
    - the organization is fairly loosely coupled - a single brand, but different businesses across different countries
    - there is some common functionality across all sites in all countries
    - there is some common functionality across different sites within the same country
    - sites within a single country may have some unique functionality, relative to other sites in the same country
    - complex product catalog (mostly in terms of bundles, eligibility, and compatibility)
    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work...
    Code / configuration - assemble into modules
    When it comes to defining your modules for a complex application, there are a number of goals:
    - divide functionality between the modules in a way that maps to your business
    - group common functionality 'further down in the stack of modules'
    - provide a good balance between shared resources and autonomy for countries / sites
    Now I'll describe a high-level approach to how you could accomplish those goals. Let's start from the bottom and work our way up. At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff. You want to make sure that you are leveraging all the modules that make sense in order to get as much value from ATG as possible - and less stuff you'll have to write yourself.
    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:
    - sits directly on top of the ATG modules
    - used by all applications across all countries and sites - this is the foundation for everyone
    - contains everything that is common across all countries / all sites
    - once established and settled, will change less frequently than other 'higher' modules
    - encapsulates as many enterprise-wide integrations as possible
    - provides a means of code sharing, therefore less development / testing - faster time to market
    - contains a 'reference' web application (described below)
    The next layer up could be multiple modules, one for each country (you could replace country with region if that makes more sense). We'll define those modules as follows:
    - sits on top of the corporate foundation module
    - contains what is unique to all sites in a given country
    - responsible for managing any resource bundles for this country (to handle multiple languages)
    - overrides / replaces corporate integration points with any country-specific ones
    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site, as follows:
    - sits on top of the country module it resides in
    - contains what is unique for a given site within a given country
    - will mostly contain configuration, but could also define some unique functionality as well
    - contains one or more web applications
    (The original post includes a graphic indicating how these modules fit together.)
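    To make the layering concrete, here is a hedged sketch of how the dependency chain might look in each module's META-INF/MANIFEST.MF (module names are invented; ATG-Required is the manifest attribute that declares module dependencies):

        MyBrand.Foundation/META-INF/MANIFEST.MF:
            ATG-Required: DCS

        MyBrand.France/META-INF/MANIFEST.MF:
            ATG-Required: MyBrand.Foundation

        MyBrand.France.SiteA/META-INF/MANIFEST.MF:
            ATG-Required: MyBrand.France

    Starting ATG with just MyBrand.France.SiteA would then pull in the country layer, the corporate foundation, and the out of the box modules automatically, in that layered order.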
    Web applications
    As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules. Web applications are also contained within ATG modules; however, sharing web applications can be a bit more difficult, because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging. One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site. Here's a description of the 'reference' web application:
    - contains minimal / sample reference styling, as this will mostly be addressed in the site-level web app
    - focuses on functionality - ensures that core functionality is revealed via this web application
    - each individual site can use this as a starting point
    - there may be multiple types of web apps (i.e. B2C, B2B, etc.)
    There are some techniques for sharing web application assets (i.e. multiple web applications defined in the web.xml); they're worth investigating, but are out of scope here.
    Reference infrastructure
    In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites. It's more likely that different countries (or regions) could have their own solution for infrastructure. In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment. Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume. Having some consistency in terms of infrastructure will save time and money as new countries / sites come online. Here are some properties of the reference infrastructure:
    - a standardized approach to the setup of hardware: the type and number of servers
    - defines the application server, operating system, database, etc. - including vendor and specific versions
    - consistent naming conventions - provides a consistent base of terminology and understanding across environments
    - defines which ATG services run on which servers: production, staging, BCC / preview
    - each site can change as required to meet scale requirements
    Governance / organization
    It should be no surprise that the complex application we're talking about is backed by an equally complex organization. One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization. Here are some ideas and goals to work towards:
    - establish a committee to make enterprise-wide decisions that affect all sites; representation should be evenly distributed, with a clear communication procedure and a focus on high-level business goals
    - evaluate feature / function gaps and how they relate to the ATG release schedule / roadmap; determine when to upgrade, and ensure the value will be realized
    - determine how to manage the various levels of modules: who is responsible for maintaining the corporate / country / site layers, and a procedure for controlling what goes into the corporate foundation module
    - standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc.; only use tested / proven versions - this should be centralized so that every country / site does not have to worry about compatibility between versions
    - create an innovation team to quickly develop new features and perform proofs of concept; all teams can benefit from its findings
    Summary
    At this point, it should be clear why the topics above (design, governance, organization, etc.) are critical to being able to efficiently manage a complex application. To summarize, it's all about competitive advantage... You will need to reduce costs and improve time to market, with the goal of providing a better experience for your end customers. You can reduce cost by reducing development time and the time allocated to testing (you don't have to test the corporate foundation module over and over again - do it once), and by optimizing operations. With an efficient design, you can improve your time to market, and your business will be more flexible and agile. Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity), and this will be rewarded - you're now a leader.
    In addition to the above, you'll realize soft benefits as well. Your staff will be operating in a culture based on sharing. You'll want to reward efforts to improve and enhance the foundation, as this will benefit everyone. This culture will inspire innovation, which can only lend itself to your competitive advantage.

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview
    The purpose of this article is to describe some of the important foundational concepts of ATG. This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc. For more information on these topics, please see the online product manuals.
    Modules
    The first concept is called the 'ATG Module'. Simply put, you can think of modules as the building blocks for ATG applications. The ATG development team builds the out of the box product using modules (these are the 'out of the box' modules). Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out of the box ATG modules. Modules can be very simple, containing minimal definition and perhaps a small amount of configuration. Alternatively, a module can be rather complex, containing custom logic, database schema definitions, configuration, one or more web applications, etc. Modules generally will have dependencies on other modules (the modules beneath them). For example, the Commerce Reference Store module (CRS) requires the DCS (out of the box commerce) module.
    Modules have a ton of value because they provide a way to decouple a customer's implementation from the out of the box ATG modules. This allows for a much easier job when it comes time to upgrade the ATG platform. Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications.
    One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start. One of the first things ATG does is to look through all the modules you specified and, for each one, determine a list of modules that are also required to start (based on each module's dependencies). Once this final, ordered list is determined, ATG continues to boot up. One of the outputs from the ordered list of modules is that each module can contain its own classes and configuration. During boot, the ordered list of modules drives the unified classpath and configpath. This is what determines which classes override others, and which configuration overrides other configuration. Think of it as a layered approach.
    The structure of a module is well defined. It simply looks like a folder in a filesystem that has certain other folders and files within it. Here is a list of items that can appear in a module:
    - META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module. One important property is what other modules this module depends on.
    - config - this is typically present in most modules. It defines a tree structure (folders containing properties files, XML, etc.) that maps to ATG components (these are described below).
    - lib - this contains the classes (typically in jarred format) for any code defined in this module.
    - j2ee - this is where any web apps would be stored.
    - src - in case you want to include the source code for this module, it's standard practice to put it here.
    - sql - if your module requires any additions to the database schema, you should place that schema here.
    (The original post includes a screenshot of a module's folder structure.)
    Modules can also contain sub-modules. A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule).
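    For illustration, here is a hedged sketch of what a simple module's META-INF/MANIFEST.MF might contain (the values are assumptions to adapt; ATG-Required declares the module's dependencies, while ATG-Config-Path and ATG-Class-Path register its configuration tree and classes):

        Manifest-Version: 1.0
        ATG-Required: DCS
        ATG-Config-Path: config/
        ATG-Class-Path: lib/classes.jar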
    Finally, it is important to completely understand how modules work if you are going to be able to leverage them effectively. There are many different ways to design the modules you want to create, and some approaches are better than others, especially if you plan to share functionality between multiple different ATG applications.
    Components
    A component in ATG can be thought of as a single item that performs a certain set of related tasks. An example could be a ProductViews component, used to store information about what products the current customer has viewed. Components have properties (also called attributes). The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products viewed, in order of their being viewed). These component properties would typically also offer get and set methods used to retrieve and store the property values. Components typically will also offer other types of useful methods aside from get and set. In the ProductViews component, we might want to offer a hasViewed method which will tell you if the customer has viewed a certain product or not.
    Components are organized in a tree-like hierarchy called 'nucleus'. Nucleus is used to locate and instantiate ATG components. So, when you create a new ATG component, it will be able to be found 'within' nucleus. Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work. It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere.
    (The original post includes a screenshot of a component in nucleus.)
    Components can be extremely simple (i.e. a single property with a get method), or can be rather complex, offering many properties and methods. To be an ATG component, a few things are required:
    - a class - you can reference an existing out of the box class or you could write your own
    - a properties file - this is used to define your component
    - the above items must be located 'within' nucleus, by placing them in the correct spot in your module's config folder
    Within the properties file, you will need to point to the class you want to use:
    $class=com.mycompany.myclass
    You may also want to define the scope of the class (request, session, or global):
    $scope=session
    In summary, ATG components live in nucleus, generally have links to other components, and provide some meaningful type of work. You can configure components as well as extend their functionality by writing code.
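    A minimal sketch of the ProductViews component described above (the package, nucleus path, and addView method are invented for illustration; real components often extend atg.nucleus.GenericService, but a plain bean keeps the sketch self-contained). The properties file, placed at, say, config/atg/userprofiling/ProductViews.properties in your module:

        $class=com.mycompany.ProductViews
        $scope=session

    And the class itself:

        package com.mycompany;

        import java.util.ArrayList;
        import java.util.List;

        // Session-scoped bean that tracks which products the current customer has viewed.
        public class ProductViews {
            private final List<String> productViewList = new ArrayList<String>();

            public void addView(String productId) { productViewList.add(productId); }

            public String getLastProductViewed() {
                return productViewList.isEmpty()
                        ? null : productViewList.get(productViewList.size() - 1);
            }

            public List<String> getProductViewList() { return productViewList; }

            public boolean hasViewed(String productId) {
                return productViewList.contains(productId);
            }
        }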
    Repositories
    Repositories (a.k.a. the Data Anywhere Architecture) are the mechanism that ATG uses to access data primarily stored in relational databases, but also in LDAP or other backend systems. ATG applications are required to be very high performance, and data access is critical in that, if not handled properly, it could create a bottleneck. ATG's repository functionality has been around for a long time and has proven to be extremely scalable. Developers new to ATG need to understand how repositories work, as this is a critical aspect of the ATG architecture. Repositories essentially map relational tables to objects in ATG, and also handle caching. ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc.), comprising both the underlying database schema and the associated repository definition files (XML).
    It is fully expected that implementations will extend / change the out of the box repository definitions, so there is a prescribed approach to doing this. The first thing to be sure of is to encapsulate your repository definition additions / changes within your own module (as described above). The other important best practice is to never modify the out of the box schema - in other words, don't add columns to existing ATG tables; just create your own new tables. These practices will help ensure you can easily upgrade your application at a later date.
    xml-combination
    As mentioned earlier, when you start ATG, the order of the modules will determine the final configpath. Files within this configpath are 'layered' such that modules on top can override the configuration of modules below. This is the same concept for repository definition files. If you want to add a few properties to the out of the box user profile, you simply need to create an XML file containing only your additions, and place it in the correct location in your module. At boot time, your definition will be combined (hence the term xml-combination) with the lower, out of the box modules, with the result being a user profile that contains everything (out of the box, plus your additions). Aside from just adding properties, there are also ways to remove and change properties.
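    A hedged sketch of such an additions-only definition file (the table, column, and property names are invented; gsa-template and item-descriptor are the standard repository-definition elements - confirm details in the repository guide). Note it follows the best practice above by putting the new property in its own auxiliary table instead of touching an ATG table:

        <gsa-template>
          <item-descriptor name="user">
            <table name="my_user_extras" type="auxiliary" id-column-name="user_id">
              <property name="loyaltyTier" data-type="string" column-name="loyalty_tier"/>
            </table>
          </item-descriptor>
        </gsa-template>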
    types of properties
    Aside from the normal database-backed properties, there are a few other interesting types:
    - transient properties - these are properties that are held in memory but not backed by any database column. These are useful for temporary storage.
    - java-backed properties - by nature these are transient, but in addition, when you access such a property (by calling the get method), instead of looking up a piece of data it performs some logic and returns the results. 'Age' is a good example: if you're storing a birth date on the profile, but your business rules are defined in terms of someone's age, you could create a simple java-backed property that looks at the birth date, compares it to the current date, and returns the person's age.
    - derived properties - this is what allows for inheritance within the repository structure. You could define a property at the category level and have the product inherit its value as well as override it. This is useful for setting defaults, with the ability to override.
    caching
    There are a number of different caching modes which are useful at different times, depending on the nature of the data being cached. For example, the simple cache mode is useful for things like user profiles, because a user profile will typically only be used on a single instance of ATG at one time. Simple cache mode is also useful for read-only types of data, such as the product catalog. Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customer's order. There are many options in terms of configuring caching which are outside the scope of this article - please refer to the product manuals for more details.
    Other important concepts - out of scope for this article
    There are a whole host of concepts that are very important pieces of the ATG platform but are out of scope for this article. Here's a brief description of some of them:
    - formhandlers - these are ATG components that handle form submissions by users.
    - pipelines - these are configurable chains of logic that are used for things like handling a request (the request pipeline) or checking out an order.
    - special kinds of repositories (versioned, files, secure, ...) - there are a couple of different types of repositories that are used in various situations. See the manuals for more information.
    - web development - JSP / DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library. This library is used throughout your JSP pages to interact with all the ATG components.
    - messaging - a message sub-system used as another way for components to interact.
    - personalization - the ability for business users to define a personalized user experience for customers. See the other blog posts related to personalization.

    Read the article

  • Webpage loading with wrong content-type after setting up CloudFlare

    - by Daniel Little
    I recently migrated my blog to the Ghost service, and I've also set up an alias DNS record with CloudFlare. While showing the blog to a colleague, I discovered one of the posts wasn't loading properly and would instead prompt to be downloaded with an application/octet-stream content-type. I can view all the pages without any issues, and I believe we're both on the same network as well. Has anyone seen a wrong content type like application/octet-stream when using CloudFlare, or does anyone know what I can do to correct this?

    Read the article

  • How will Deja-Dup operate when backing up to an external USB drive?

    - by Little Bobby Tables
    I want to set up regular backups, and deja-dup seems like a nice tool. However, I want to put my backups on an external USB drive that I have, not on a remote network location. Naturally, this drive is not always connected. If I configure deja-dup to back up to a directory on this drive (e.g. /media/extention/backup), what would happen? Will it prompt me to connect the drive when it is missing (the desired behavior), or just fail silently? Is there some way to tweak it to do so? I can roll my own cron-based backup script that checks if this drive is mounted, but I would really prefer to use an existing, integrated tool.

    Read the article

  • Can't print, CUPS package corrupted and hangs on re-install

    - by Little Bobby Tables
    When I upgraded to Ubuntu 10.04 (Lucid), the upgrade process got stuck on the post-installation of the CUPS package. I had to kill processes and run several forced updates before I could finally get regular updates. Ever since, I can't print - the printed file gets messed up and crashes the printer. I also can't re-install CUPS, as each time the installation hangs and I have to kill it before it completes. I tried to find a workaround for this problem, but in vain. Does anyone know how to bypass this? Or at least why the post-installation can hang, and how to re-install a problematic package? Some system specs and other hints: Dell D630 laptop running Ubuntu 10.04, GNOME desktop, standard LAN network, printing to an LPD server. Everything worked fine on 9.10. Also, the printed files themselves are not corrupted. The problem does not seem to be Evince-specific, but common to all printouts.

    Read the article

  • Where can I learn image processing? [on hold]

    - by Little Child
    I am learning image processing on my own and I have managed to teach myself a fair few things, like:
    - making images grayscale using 3 different methods
    - applying a 'pixellate' filter
    - applying a 'pointillize' filter
    - making images out of lines
    Now I want to take my knowledge further, but I do not know how. Adding more information: I am interested in making software like Photoshop or Gimp (although it won't be half as powerful as those two). So I want to learn to apply various creative effects to an image. Can someone please suggest resources for this?
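    For reference, here is one of the grayscale methods mentioned above, sketched in Java (the luminosity method with the common Rec. 601 weights; the other two usual methods average the channels or desaturate via the min/max of the channels):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class Grayscale {
            public static void main(String[] args) throws Exception {
                BufferedImage src = ImageIO.read(new File("input.png"));
                BufferedImage dst = new BufferedImage(
                        src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
                for (int y = 0; y < src.getHeight(); y++) {
                    for (int x = 0; x < src.getWidth(); x++) {
                        int rgb = src.getRGB(x, y);
                        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                        // Weight green highest - the eye is most sensitive to it.
                        int gray = (int) (0.299 * r + 0.587 * g + 0.114 * b);
                        dst.setRGB(x, y, (gray << 16) | (gray << 8) | gray);
                    }
                }
                ImageIO.write(dst, "png", new File("gray.png"));
            }
        }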

    Read the article

  • Looking for an old classic book about Unix command-line tools

    - by Little Bobby Tables
    I am looking for a book about the Unix command-line toolkit (sh, grep, sed, awk, cut, etc.) that I read some time ago. It was an excellent book, but I totally forgot its name. The great thing about this specific book was the running example. It showed how to implement a university bookkeeping system using only text-processing tools. You would find a student by name with grep, update grades with sed, calculate average grades with awk, attach grades to IDs with cut, and so on. If my memory serves, this book had a black cover and was published circa 1980. Does anyone remember this book? I would appreciate any help in finding it.

    Read the article

  • Pasting from vim in terminal to Google Docs (Firefox + Vimperator) - need to understand

    - by LIttle Ancient Forest Kami
    I had some trouble with copy-pasting text from vim in a terminal to a Google Docs (aka Drive) document (hereafter GDd) in the FF browser (with Vimperator). Note:
    - I have a file opened in Vim 7.2 in a terminal; :version displays both +clipboard and +xterm-clipboard
    - I'm on Ubuntu 10.04 LTS, so I don't think that's Unity-related
    - I want to use Vim, not GVim, nor gedit...
    - I'm an avid fan of mouseless navigation, so a solution involving the mouse was not what I wanted.
    I have the solution, but I need understanding. What I tried and where it gets me:
    1. Yanking the whole file text via ggvGy allows me to paste it via the mouse middle button - here in the question text area and in gedit, but NOT in GDd where I want it pasted, even if I switch Vimperator to pass-through mode with Insert - but NOT with Ctrl+v or Shift+Insert. The text does NOT show in XClip after xclip -o. From gedit, I can copy-paste the text into GDd (Vimperator's pass-through mode not required).
    2. :%! !xclip -i (or :first,last) reports the whole file (all lines, to be precise) as filtered, though the shell returns 1, and xclip -o returns nothing (is empty) or returns the previously copied value. With 2, no surprise, I can't paste at all - not only to GDd but also to gedit or here.
    3. Setting clipboard to unnamed (:set clipboard=unnamed) doesn't help.
    4. Using "+y or "*y on the whole file text actually does the trick.
    So, the question (it's actually three; say "split" and I will):
    - why does the middle mouse button paste different things than Ctrl+v, and how can I know what will be pasted with each?
    - why does just yanking (without registers) work with the mouse but not with the keyboard / XClip?
    - why didn't the unnamed register help? After setting it, it should make the unnamed and * registers the same?

    Read the article

  • Should one reject over-scoped projects?

    - by Little Child
    I spoke to my first potential client today and he told me about the requirements of his project - an Android app. He is a well-known designer / photographer in my country and now wants me to "convert the website into an app, custom-tailored". So the requirements, details stripped out, are as follows:
    - eCommerce
    - aggregating all his content, like videos, blogs, tweets, etc., into the app
    - live streaming any of his studio demos
    - augmented reality (AR), so that people can see what his painting will look like on their wall before they buy it
    - taxi sharing
    Now, for a freelance project, it seems too over-scoped. I am not saying that I cannot do it. I can. But let me be realistic:
    - there is a steep learning curve when it comes to AR
    - I am not a tester; I have never white-box tested my own apps, I always black-box test
    - since he is a renowned artist, anything short of perfect might harm his public image
    So, I asked him for 2 weeks' worth of time before I give him the final answer. Not knowing whom to consult for advice, I am posting the question here. Although the project is interesting and personally challenging, I am split-minded about accepting it, and I will be the only developer on it. Should one reject a project that seems to be over-scoped for one's own abilities?

    Read the article

  • Dead (nearly blank) laptop screen, secondary screen works - how to fix?

    - by LIttle Ancient Forest Kami
    My laptop screen is black while my secondary screen is fine. What I tried:
    - setting brightness (Fn keys) - no effect, no change seen (also on the secondary screen)
    - removing static electricity like suggested here - no effect
    - restarting / charging the battery, running on battery / "wall" power - no effect as well
    - waiting to see if warming it up helps - it doesn't
    - following the official Ubuntu diagnostics - checking now...
    What I will try next:
    - check the last updates I've made
    - IIRC I am running on nomodeset already, but can't recall how to verify this
    Further symptoms:
    - can't see the BIOS screen
    - the system loads and works fine, just the screen has problems
    - the screen works (occasionally I could glimpse very dimly what was going on, but it was like with minimum brightness set - nearly indistinguishable from a plain black screen)
    Any ideas how to proceed best? What is the most probable cause?

    Read the article

  • Dim (NEARLY blank) laptop screen, secondary screen works - why?

    - by LIttle Ancient Forest Kami
    My laptop screen is (almost) black while my secondary screen is fine. I believe it to be backlight / brightness related.
    Problem description:
    - it starts when I start the laptop
    - the system loads and works fine, just the screen has problems
    - I can see the screen, though very faintly / dimly - it's hard to see anything which isn't very white
    - e.g. the starting screen has a big Thinkpad logo in white, large font - I can see it, though very dimly
    - the second screen works very well
    Official backlight debugging:
    - using the acpi setting as prescribed there for Thinkpads didn't help
    - I can see an entry in /sys/class/backlight/ and it changes when I press the hotkeys for brightness (current backlight power, for instance, goes up or down)
    - acpi=off didn't help, neither did acpi_backlight=vendor
    Hardware data: the laptop is a Thinkpad Edge with a glossy screen. 4 processors, 2 cores; exemplary CPU data from cat /proc/cpuinfo reports a Genuine Intel i5 (M 480 @ 2.67GHz). The OS is Ubuntu Lucid, 10.04 LTS, 64-bit, with the generic Linux kernel (2.6.32-44) and GNOME 2.32.2 (though I doubt there lies the problem).
        $ lspci | grep VGA
        01:00.0 VGA compatible controller: ATI Technologies Inc M92 [Mobility Radeon HD 4500 Series]
        $ lshw -C display
          *-display
               description: VGA compatible controller
               product: M92 [Mobility Radeon HD 4500 Series]
               vendor: ATI Technologies Inc
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress msi bus_master cap_list rom
               configuration: driver=radeon latency=0
               resources: irq:33 memory:c0000000-dfffffff(prefetchable) ioport:2000(size=256) memory:f0300000-f030ffff memory:f0320000-f033ffff(prefetchable)
    Driver: I was NOT running any proprietary drivers - I just checked with "Hardware drivers". There is one for ATI that is suggested there, though I didn't need it so far. UPDATE: changing the driver to the proprietary one (ATI/AMD FGLRX) didn't help.
    Tried and failed: resetting / running on power or battery / charging / getting rid of static electricity / warming up *doesn't help*. This is NOT a blank-screen problem - at least it isn't according to the official Ubuntu black-screen diagnostics - since I can see my screen, though barely.
    What I will try next:
    - check the last updates I've made
    - IIRC I am running on nomodeset already, but will verify this
    Any ideas how to proceed best? What is the most probable cause?

    Read the article

  • How to make my proxy settings change depending on the network I connect to?

    - by Little Jawa
    My company's corporate network requires me to set a network proxy to access the net, but when I am anywhere else, I don't need it. The proxy settings in Ubuntu (System - Preferences - Proxy server) allowed me to create "locations" that I can manually select. Then I have a "default" location (with no proxy) and a "work" location (with my company's proxy in it). Is there a way to make Ubuntu automatically select the "work" location based on the connection I'm using? I thought I could use the IP subnet (very specific) to detect where I am, but I have no idea how to set it up... Edit: I really need to have the proxy settings set at the system level. All my network connections (IMAP, SMTP, chat, etc) need to go through the proxy. Not only the web browser.

    Read the article

  • Help with stock ticker style scrolling using Core Animation

    - by Glen Harding
    Hi, I'm looking for some guidance on the best way to implement stock ticker style right-to-left scrolling of CALayers in Core Animation on OSX. I'm pretty new to Cocoa and don't know the best way to implement this. I have a continuous stream of news items and stock details that I turn into CALayers (made up of 1 image and a CATextLayer) and I want to animate the CALayers from right to left of my custom view. The news and stock information is constantly updating so I would like to add 1 item at a time to the view, scroll it right to left until the right-most point of the CALayer is showing, then add another CALayer to the view and start scrolling that as well. I would like to do this dynamic updating instead of taking a big snapshot of all my data, turning it into a big horizontal CALayer and scrolling that. I'm looking for guidance on how to achieve this sort of effect - do I manually animate each new CALayer along a path in the view? Or should I be using CAScrollLayer to achieve this effect? Many thanks Glen.

    Read the article
