Search Results

Search found 5327 results on 214 pages for 'john brown evans'.

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article

  • Functional Programming in C++, an article by John Carmack translated by Arzar

    One of the strengths of the C++ language is that it is multi-paradigm. Even though C++ is not a functional language at its core, it offers many features that make it possible to implement certain functional programming concepts. In this article, John Carmack examines some of these concepts, their practical benefits, and how to use them in C++. Functional Programming in C++: Have you ever used functional programming in C++? What do you think of the concepts of...
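    As a rough illustration of the ideas the article covers (a minimal sketch of my own, not code from Carmack's article; all names here are made up), a pure lambda combined with std::transform expresses a computation without mutating its input:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            const std::vector<double> prices = {10.0, 20.0, 30.0};

            // A pure function of its argument: no shared state, no side effects.
            const auto with_tax = [](double p) { return p * 1.2; };

            // std::transform applies the lambda element by element; 'prices' is never modified.
            std::vector<double> taxed(prices.size());
            std::transform(prices.begin(), prices.end(), taxed.begin(), with_tax);

            for (double p : taxed) std::cout << p << '\n';
            return 0;
        }

    Because with_tax depends only on its argument, the call can be reordered, memoized, or parallelized without worrying about hidden state, which is the practical benefit usually claimed for this style.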

    Read the article

  • The Evolution of Oracle Direct EMEA by John McGann

    - by user769227
    John is expanding his Dublin based team and is currently recruiting a Director with marketing and sales leadership experience: http://bit.ly/O8PyDF Should you wish to apply, please send your CV to [email protected] Hi, my name is John McGann and I am part of the Oracle Direct management team, based in Dublin.   Today I’m writing from the Oracle London City office, right in the heart of the financial district and up to very recently at the centre of a fantastic Olympic Games. The Olympics saw individuals and teams from across the globe competing to decide who is Citius, Altius, Fortius - “Faster, Higher, Stronger" There are lots of obvious parallels between the competitive world of the Olympics and the Business environments that many of us operate in, but there are also some interesting differences – especially in my area of responsibility within Oracle. We are of course constantly striving to be the best - the best solution on offer for our clients, bringing simplicity to their management, consumption and application of information technology, and the best provider when compared with our many niche competitors.   In Oracle and especially in Oracle Direct, a key aspect of how we achieve this is what sets us apart from the Olympians.  We have long ago eliminated geographic boundaries as a limitation to what we can achieve. We assemble the strongest individuals across multiple countries and bring them together in teams focussed on a single goal. One such team is the Oracle Direct Sales Programs team. In case you don’t know, Oracle Direct EMEA (Europe Middle East and Africa) is the inside sales division in Oracle and it is where I started my Oracle career.  I remember that my first role involved putting direct mail in envelopes.... things have moved on a bit since then – for me, for Oracle Direct and in how we interact with our customers. Today, the team of over 1000 people is located in the different Oracle Direct offices around Europe – the main ones are Malaga, Berlin, Prague and Dubai plus the headquarters in Dublin. We work in over 20 languages and are in constant contact with current and future Oracle customers, using the latest internet and telephone technologies to effectively communicate and collaborate with each other, our customers and prospects. One of my areas of responsibility within Oracle Direct is the Sales Programs team. This team of 25 people manages the planning and execution of demand generation, leading the process of finding new and incremental revenue within Oracle Direct. The Sales Programs Managers or ‘SPMs’ are embedded within each of the Oracle Direct sales teams, focussed on distinct geographies or product groups. The SPMs are virtual members of the regional sales management teams, and work closely with the sales and marketing teams to define and deliver demand generation activities. The customer contact elements of these activities are executed via the Oracle Direct Sales and Business Development/Lead Generation teams, to deliver the pipeline required to meet our revenue goals. Activities can range from pan-EMEA joint sales and marketing campaigns, to very localised niche campaigns. The campaigns might focus on particular segments of our existing customers, introducing elements of our evolving solution portfolio which customers may not be familiar with. The Sales Programs team also manages ‘Nurture’ activities to ensure that we develop potential business opportunities with contacts and organisations that do not have immediate requirements. 
Looking ahead, it is really important that we continue to evolve our ability to add value to our clients and reduce the physical limitations of our distance from them through the innovative application of technology. This enables us to enhance the customer buying experience and to enable the Inside Sales teams to manage ever more complex sales cycles from start to finish.  One of my expectations of my team is to actively drive innovation in how we leverage data to better understand our customers, and exploit emerging technologies to better communicate with them.   With the rate of innovation and acquisition within Oracle, we need to ensure that existing and potential customers are aware of all we have to offer that relates to their business goals.   We need to achieve this via a coherent communication and sales strategy to effectively target the right people using the most effective medium. This is another area where the Sales Programs team plays a key role.

    Read the article

  • John Burke's Welcome to the Applications Strategy Blog

    - by Tony Ouk
    Hi I'm John Burke and I'm the group Vice President of Oracle's Applications Business Unit.  Thanks for stopping by our Applications blog today.  The purpose of this site is to provide you, our customers, with timely, relevant, and balanced information about the state of the applications business, both here at Oracle and industry-wide. So on this site, you'll find information about Oracle's application products, how our customers have used those products to transform their businesses, and general industry trends which might help you craft YOUR applications roadmap.  So right now I'm walking to meet with one of Oracle's development executives.  I also plan to talk to Oracle customers and leading industry analysts.  I plan to provide a complete and balanced view of the total applications landscape.  I hope you check back often and view our updates.

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.  Q: How do you effectively engage employees to drive business value?A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it.  Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employee's motivations.  Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or frequency of innovation created.  Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or ideally less time, adds value for that individual and in turn, drives their level of engagement to drive business value. Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be? A: Alignment through shared purpose.  Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement and it permeates every aspect of their organization – embodied externally through their customer experience.  I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose.  The Zappos.com ‘About page’ outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them.  Strong shared purpose enables higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded marketshare. Q: Have you seen any trends in the market regarding employee engagement? A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure like email or instant messaging.  Originally offered as standalone tools, the value is now seen when these capabilities are offered in an integrated fashion in the context of business entities.  An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter).  Companies have reached a formal understand that this now established communication medium requires strategies allowing employees to engage.  I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting first through internal channels. Q: Employee engagement is important, but what about engaging customers and partners? A: The last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service.  From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets.  Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.  
As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors.  This same model can be extended to blending services and product delivery models, based on a closeness not easily possible before increased capability of engagement mechanisms. Q: What types of solutions are available to successfully deliver employee engagement? A: Solutions enabling higher levels of engagement do so on the basis of relevancy.  This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies.  These technologies can help deliver an experience tailored to a given role or process within an organization that applies equally to work that is structured or unstructured, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature rich business intelligence dashboards and portals. Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick. 

    Read the article

  • I don't have permission to access other drives

    - by mcjohnalds45
    After messing with the user accounts & names, I found I can't access my external drives without using sudo. So when I access one normally with

        cd "/media/john/FreeAgent Drive"

    I receive

        bash: cd: /media/john/FreeAgent Drive: Permission denied

    However, using sudo:

        sudo cd /media/john
        sudo ls -l

    it gives:

        drwx------ 1 john john 20480 Sep 24 10:45 FreeAgent Drive/

    And id returns

        uid=1003(john) gid=1003(john) groups=1003(john), ...

    So I'm interpreting this as "you are john, only john can access this drive; however, you cannot access this drive." I have tried

        sudo chown john:john "FreeAgent Drive"

    and

        sudo chmod o+rw "john/FreeAgent Drive"

    but I still can't access it.

    Read the article

  • Does RVM "failover" to another ruby instance on error?

    - by JohnMetta
    Have a strange problem in that I have a Rake task that seems to be using multiple versions of Ruby. When one fails, it seems to try another one. Details MacBook running 10.6.5 rvm 1.1.0 Rubies: 1.8.7-p302, ree-1.8.7-2010.02, ruby-1.9.2-p0 Rake 0.8.7 Gem 1.3.7 Veewee (provisioning Virtual Machines using Opcode.com, Vagrant and Chef) I'm not entirely sure the specific details of the error matter, but since it might be an issue with Veewee itself. So, what I'm trying to do is build a new box base on a veewee definition. The command fails with an error about a missing method- but what's interesting is how it fails. Errors I managed to figure out that if I only have one Ruby installed with RVM, it just fails. If I have more than one Ruby install, it fails at the same place, but execution seems to continue in another interpreter. Here are two different clipped console outputs. I've clipped them for size. The full outputs of each error are available as a gist. One Ruby version installed Here is the command run when I only have a single version of Ruby (1.8.7) available in RVM boudica:veewee john$ rvm rake build['mettabox'] --trace rvm 1.1.0 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/] (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build … creating new harddrive rake aborted! undefined method `max_vdi_size' for #<VirtualBox::SystemProperties:0x102d6af80> /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/virtualbox-0.8.3/lib/virtualbox/abstract_model/dirty.rb:172:in `method_missing' <------ stacktraces cut ----------> /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/bin/rake:31 /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19:in `load' /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19 Multiple Ruby Versions Here is the same command run with three versions of Ruby available in RVM. Prior to doing this, I used "rvm use 1.8.7." Again, I don't know how important the details of the specific errors are- what's interesting to me is that there are three separate errors- each with it's own stacktrace- and each in a different Ruby interpreter. Look at the bottom of each stacktrace and you'll see that they are all sourced from different interpreter locations- First ree-1.8.7, then ruby-1.8.7, then ruby-1.9.2: boudica:veewee john$ rvm rake build['mettabox'] --trace rvm 1.1.0 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/] (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build … creating new harddrive rake aborted! 
undefined method `max_vdi_size' for #<VirtualBox::SystemProperties:0x1059dd608> /Users/john/.rvm/gems/ree-1.8.7-2010.02/gems/virtualbox-0.8.3/lib/virtualbox/abstract_model/dirty.rb:172:in `method_missing' … /Users/john/.rvm/gems/ree-1.8.7-2010.02/gems/rake-0.8.7/bin/rake:31 /Users/john/.rvm/gems/ree-1.8.7-2010.02@global/bin/rake:19:in `load' /Users/john/.rvm/gems/ree-1.8.7-2010.02@global/bin/rake:19 (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build isofile ubuntu-10.04.1-server-amd64.iso is available ["a1b857f92eecaf9f0a31ecfc39dee906", "30b5c6fdddbfe7b397fe506400be698d"] [] Last good state: -1 Current step: 0 last good state -1 destroying machine+disks (re-)executing step 0-initial-a1b857f92eecaf9f0a31ecfc39dee906 VBoxManage: error: Machine settings file '/Users/john/VirtualBox VMs/mettabox/mettabox.vbox' already exists VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Machine, interface IMachine, callee nsISupports Context: "CreateMachine(bstrSettingsFile.raw(), name.raw(), osTypeId.raw(), Guid(id).toUtf16().raw(), FALSE , machine.asOutParam())" at line 247 of file VBoxManageMisc.cpp rake aborted! undefined method `memory_size=' for nil:NilClass /Users/john/Work/veewee/lib/veewee/session.rb:303:in `create_vm' /Users/john/Work/veewee/lib/veewee/session.rb:166:in `build' /Users/john/Work/veewee/lib/veewee/session.rb:560:in `transaction' /Users/john/Work/veewee/lib/veewee/session.rb:163:in `build' /Users/john/Work/veewee/Rakefile:87 /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/lib/rake.rb:631:in `each' … /Users/john/.rvm/gems/ruby-1.8.7-p302/gems/rake-0.8.7/bin/rake:31 /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19:in `load' /Users/john/.rvm/gems/ruby-1.8.7-p302@global/bin/rake:19 (in /Users/john/Work/veewee) ** Invoke build (first_time) ** Execute build isofile ubuntu-10.04.1-server-amd64.iso is available ["a9c4ab3257e1da3479c984eae9905c2a", "30b5c6fdddbfe7b397fe506400be698d"] [] Last good state: -1 Current step: 0 last good state -1 (re-)executing step 0-initial-a9c4ab3257e1da3479c984eae9905c2a VBoxManage: error: Machine settings file '/Users/john/VirtualBox VMs/mettabox/mettabox.vbox' already exists VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Machine, interface IMachine, callee nsISupports Context: "CreateMachine(bstrSettingsFile.raw(), name.raw(), osTypeId.raw(), Guid(id).toUtf16().raw(), FALSE , machine.asOutParam())" at line 247 of file VBoxManageMisc.cpp rake aborted! undefined method `memory_size=' for nil:NilClass /Users/john/Work/veewee/lib/veewee/session.rb:303:in `create_vm' /Users/john/Work/veewee/lib/veewee/session.rb:166:in `block in build' /Users/john/Work/veewee/lib/veewee/session.rb:560:in `transaction' /Users/john/Work/veewee/lib/veewee/session.rb:163:in `build' /Users/john/Work/veewee/Rakefile:87:in `block in <top (required)>' /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:634:in `call' /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:634:in `block in execute' … /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:2013:in `top_level' /Users/john/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/rake.rb:1992:in `run' /Users/john/.rvm/rubies/ruby-1.9.2-p0/bin/rake:35:in `<main>' It isn't until we reach the last installed version of Ruby that execution halts. 
    Discussion

    Does anyone have any idea what's going on here? Has anyone seen this "failover"-like behavior before? It seems strange to me that the first exception would not halt execution as it did with one interpreter, but I wonder if there are things happening when RVM is installed that we Ruby developers are not considering.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
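    As a quick aside (a hedged C++ sketch of my own, not code from the study), the cache-order point looks like this for a row-major two-dimensional array stored in a flat vector: the row-order loop walks memory sequentially and uses every word of each cache line, while the column-order loop strides across rows:

        #include <vector>

        constexpr int N = 2048;

        // Row-major layout: element (i, j) lives at a[i * N + j].
        double sum_row_order(const std::vector<double>& a) {
            double s = 0.0;
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)      // inner index walks contiguous memory
                    s += a[i * N + j];
            return s;
        }

        double sum_column_order(const std::vector<double>& a) {
            double s = 0.0;
            for (int j = 0; j < N; ++j)
                for (int i = 0; i < N; ++i)      // inner index jumps N doubles per step
                    s += a[i * N + j];
            return s;
        }

        int main() {
            std::vector<double> a(N * N, 1.0);
            // Both calls return the same sum; only the traversal order, and hence
            // the cache behaviour, differs.
            return sum_row_order(a) == sum_column_order(a) ? 0 : 1;
        }
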
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
    Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old-new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers (a short illustrative sketch appears at the end of this article). Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
I found such an occasion in code 10 where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in questions have not studied the books or manual available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
    These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

    About the Author

    John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.
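    Returning to the copy-operations section above (structure assignment and parameter passing in C): here is the short sketch referred to there, a hedged C++ illustration of my own with made-up names rather than code from the study. Passing a large structure by value copies its whole data space on every call, while passing by reference, or by pointer in C, does not:

        #include <array>
        #include <iostream>

        struct Grid {                              // a deliberately large structure
            std::array<double, 4096> cells{};      // ~32 KB of data
            double scale = 1.0;
        };

        // By value: the entire Grid is copied at every call site.
        double total_by_value(Grid g) {
            double s = 0.0;
            for (double c : g.cells) s += c * g.scale;
            return s;
        }

        // By const reference (a pointer in C): same result, no copy.
        double total_by_ref(const Grid& g) {
            double s = 0.0;
            for (double c : g.cells) s += c * g.scale;
            return s;
        }

        int main() {
            Grid g;
            g.cells.fill(2.0);
            std::cout << total_by_value(g) << " " << total_by_ref(g) << '\n';
            return 0;
        }

    If such a call sits inside an inner loop, the avoided copies add up quickly, which is the effect the author describes for structure assignment and parameter passing.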

    Read the article

  • Web Experience Management: Segmentation & Targeting - Chalk Talk with John

    - by Michael Snow
    Today's post comes from our WebCenter friend, John Brunswick. Having trouble getting your arms around the differences between Web Content Management (WCM) and Web Experience Management (WEM)? Told through story, the video below outlines the differences in an easy to understand manner. By following the journey of Mr. and Mrs. Smith on their adventure to find the best amusement park in two neighboring towns, we can clearly see what an impact context and relevancy play in our decision making within online channels. Just as when we search to connect with the best products and services for our needs, the Smiths have their grandchildren coming to visit next week and finding the best park is essential to guarantee a great family vacation. One town effectively Segments and Targets visitors to enhance their experience, reducing the effort needed to learn about their park. Have a look below to join the Smiths in their search.

    Learn MORE about how you might measure up: Deliver Engaging Digital Experiences | Drive Digital Marketing Success | Access Free Assessment Tool

    Read the article

  • Installing XAMPP in Xubuntu 13.10

    - by illage2
    I downloaded the XAMPP .run file from Apacheandfriends but the installation isn't working for me. I can't seem to navigate to my downloads folder and it just keeps saying command not found all the time.

        root@john-Aspire-V3-531:/home/john# cd ~/downloads
        bash: cd: /root/downloads: No such file or directory
        root@john-Aspire-V3-531:/home/john# cd ~/Downloads
        bash: cd: /root/Downloads: No such file or directory
        root@john-Aspire-V3-531:/home/john# /downloads
        bash: /downloads: No such file or directory
        root@john-Aspire-V3-531:/home/john# cd /downloads
        bash: cd: /downloads: No such file or directory
        root@john-Aspire-V3-531:/home/john# cd downloads
        bash: cd: downloads: No such file or directory
        root@john-Aspire-V3-531:/home/john# downloads
        downloads: command not found

    What do I need to do? Apacheandfriends says to:

        chmod 755 xampp-linux-1.8.2-0-installer.run

    and then

        ./xampp-linux-1.8.2-0-installer.run

    but it doesn't seem to think that the file exists. Can anyone help me?

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers under the English Channel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then came along Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judged the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling them anywhere access, on the fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Java Spotlight Episode 57: Live From #Devoxx - Ben Evans and Martijn Verburg of the London JUG with Yara Senger of SouJava

    - by Roger Brinkley
    Live from Devoxx 11, an interview with Ben Evans and Martijn Verburg from the London JUG along with Yara Senger from the SouJava JUG on the JCP Executive Committee Elections, JSR 348, and the Adopt-a-JSR program. Both the London JUG and SouJava JUG are JCP Standard Edition Executive Committee Members. Joining us this week on the Java All Star Developer Panel are Geertjan Wielenga, Principal Product Manager in Oracle Developer Tools; Stephen Chin, Java Champion and JavaFX expert; and Antonio Goncalves, Paris JUG leader. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes. News: NetBeans 7.1; JDK 7 upgrade tools; NetBeans First Patch Program; OpenJFX approved as an OpenJDK project; Devoxx France, April 18-20, 2012. Events: Nov 22-25, OTN Developer Days in the Nordics; Nov 22-23, Goto Conference, Prague; Dec 6-8, JavaOne Brazil, Sao Paulo.

    Feature interview: Ben Evans has lived in "Interesting Times" in technology - he was the lead performance testing engineer for the Google IPO, worked on the initial UK trials of 3G networks with BT, built award-winning websites for some of Hollywood's biggest hits of the 90s, rearchitected and reimagined technology helping some of the most vulnerable people in the UK, and has worked on everything from some of the UK's very first ecommerce sites through to multi-billion dollar currency trading systems. He helps to run the London Java Community and represents the JUG on the Java SE/EE Executive Committee. His first book "The Well-Grounded Java Developer" (with Martijn Verburg) has just been published by Manning. Martijn Verburg (aka 'the Diabolical Developer') herds cats in the Java/open source communities and is constantly humbled by the creative power to be found there. Currently he resides in London, where he co-leads the London JUG (a JCP EC member), runs a couple of open source projects and drinks too much beer at his local pub. You can find him online moderating at the JavaRanch or discussing (ranting?) on the Programmers Stack Exchange site. Most recently he has become a regular speaker at conferences on Java, open source and software development, and has wrapped up his first Manning title - "The Well-Grounded Java Developer" - with his co-author Ben Evans. Yara Senger is a partner and director of education at Globalcode. A graduate of the University of Sao Paulo, Sao Carlos, she has significant experience in Brazil and abroad developing mission-critical Java solutions. She is the co-creator of the Java Academy and Web Developer Academy programs and has accumulated over 1,000 hours in the classroom teaching Java. She currently serves as the President of SouJava. In this interview Ben, Martijn, and Yara talk about the JCP Executive Committee Elections, JSR 348, and the Adopt-a-JSR program.

    Mail Bag. What's Cool. Show Transcripts: the transcript for this show will be posted here when available.

    Read the article

  • setting up a shared folder in linux

    - by Chris
    I'm trying to set up a folder in my home directory that will be shared with another user, but for some reason it is not working. This is what I've done (I have tried two different ways, using ACLs and chown/chgrp etc.). I set up a group called, say, sharedgroup and added both my user (john) and fred to it, so when I run:

        groups john
        john wheel sharedgroup
        groups fred
        sharedgroup fred

    Then:

        mkdir /home/john/shared
        vim /home/john/shared/hello.txt   (typed in some text, saved it)
        chown -R :sharedgroup shared
        chmod -R o=-rwx shared
        ll
        drwxrwx--- 2 john sharedgroup 4096 Sep 9 21:14 shared
        ll shared
        -rw-rw-r-- 1 john sharedgroup 7 Sep 9 21:14 hello.txt

    (I also tried adding in the s permissions but that didn't help either.) Then when I log out of the server, log back in as fred and try these commands, they fail:

        vim /home/john/shared/hello.txt   (won't allow me to write, opens a blank file)
        cd /home/john/shared
        -bash: cd: /home/john/cis: Permission Denied
        ls /home/john/shared
        -ls: /home/john/shared: Permission Denied
        ls -lad /home/john/shared
        -ls: /home/john/shared: Permission Denied
        id fred
        uid=500(fred) gid=502(sharedgroup) groups=502(sharedgroup),500(fred) context=user_u:system_r:unconfined_t

    Any idea what I'm doing wrong?
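    A hedged diagnosis: the shared directory itself looks fine, so the usual culprit is the parent directory /home/john, which fred cannot traverse (many distributions create home directories with mode 700 or 750 and a private group). A minimal shell sketch of the check and two possible fixes; the exact modes below are assumptions, not taken from the question:

        ls -ld /home/john                        # if this shows drwx------, fred is blocked here, not at shared/
        # Option 1: give sharedgroup traverse (execute) permission on the parent
        chgrp sharedgroup /home/john && chmod 750 /home/john
        # Option 2: leave john's home group alone and grant traverse via an ACL
        setfacl -m g:sharedgroup:x /home/john
        # Optional: setgid on shared/ so new files inherit the sharedgroup group
        chmod g+s /home/john/shared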

    Read the article

  • What are the default mount settings for mount / fstab?

    - by John Craick
    What are the default mounting options for a non-root partition? The man entry for mount says:

        defaults - use default options: rw, suid, dev, exec, auto, nouser, and async.

    ... so that might be what we expect to see. But, unless I'm missing something, that's not what happens. I have an ext3 partition labelled "NewHome20G" which is seen as /dev/sdc6 by the system. This we can see from:

        root@john-pc1204:~# blkid | grep NewHome20G
        /dev/sdc6: LABEL="NewHome20G" UUID="d024bad5-906c-46c0-b7d4-812daf2c9628" TYPE="ext3"

    I have an entry in fstab as follows:

        root@john-pc1204:~# cat /etc/fstab | grep NewHome
        LABEL=NewHome20G /media/NewHome20G ext3 rw,nosuid,nodev,exec,users 0 2

    Note the option settings that are specified in that fstab line. Now I look at how the partition is actually mounted after boot-up:

        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    ... so, when the filesystem gets mounted, the exec and users options I specified seem to have been ignored. Just to be sure, I unmount sdc6, remount it and look at the mount options again:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    ... same result. Now I unmount the partition again, remount it specifying the exec option and look at the result:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6 -o exec
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,nosuid,nodev) [NewHome20G]

    ... and here the exec option has finally taken effect and the noexec setting has vanished. Just for interest, I re-mount the partition with the defaults option:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6 -o defaults
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    The noexec is back, so it looks very much like rw,noexec,nosuid,nodev are the default options, which is NOT what man says. Why does this matter? I have a folder full of useful scripts stored on a data disk. Because that disk is mounted noexec, those scripts won't run, even though they have all been set with chmod 777. I can work round this in several ways, but it's disappointing that the man entry seems to be wrong. Have I missed something obvious here, or have the default options in Ubuntu changed from what they were a few versions ago?
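    A likely explanation, offered as a sketch rather than a definitive answer: according to mount(8), the user/users option itself implies noexec, nosuid and nodev, and options are applied left to right, so the exec listed before users in this fstab line gets switched back off again. The "defaults" list quoted from the man page only applies when the word defaults is actually used on its own. Re-ordering the options so exec comes after users should restore it:

        # /etc/fstab -- list exec *after* users so it overrides the implied noexec
        LABEL=NewHome20G  /media/NewHome20G  ext3  rw,nosuid,nodev,users,exec  0  2

        # then re-read the options without rebooting
        mount -o remount /media/NewHome20G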

    Read the article

  • Microsoft T-SQL Counting Consecutive Records

    - by JeffW
    Problem: From the most current day per person, count the number of consecutive days that each person has received 0 points for being good. Sample data to work from:

        Date        Name  Points
        2010-05-07  Jane  0
        2010-05-06  Jane  1
        2010-05-07  John  0
        2010-05-06  John  0
        2010-05-05  John  0
        2010-05-04  John  0
        2010-05-03  John  1
        2010-05-02  John  1
        2010-05-01  John  0

    Expected answer: Jane was bad on 5/7 but good the day before that. So Jane was only bad 1 day in a row most recently. John was bad on 5/7, again on 5/6, 5/5 and 5/4. He was good on 5/3. So John was bad the last 4 days in a row. Code to create sample data:

        IF OBJECT_ID('tempdb..#z') IS NOT NULL BEGIN DROP TABLE #z END
        select getdate() as Date, 'John' as Name, 0 as Points into #z
        insert into #z values(getdate()-1, 'John', 0)
        insert into #z values(getdate()-2, 'John', 0)
        insert into #z values(getdate()-3, 'John', 0)
        insert into #z values(getdate()-4, 'John', 1)
        insert into #z values(getdate(), 'Jane', 0)
        insert into #z values(getdate()-1, 'Jane', 1)
        select * from #z order by name, date desc
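    One possible T-SQL approach, sketched against the #z sample table above and assuming one row per person per day with no gaps: find each person's most recent day with a non-zero score, then count the zero-point rows that come after it. The LEFT JOIN keeps people who have never scored at all.

        ;WITH LastGood AS (
            -- most recent day on which each person scored any points
            SELECT Name, MAX(Date) AS LastGoodDate
            FROM #z
            WHERE Points <> 0
            GROUP BY Name
        )
        SELECT z.Name, COUNT(*) AS ConsecutiveZeroDays
        FROM #z AS z
        LEFT JOIN LastGood AS lg ON lg.Name = z.Name
        WHERE z.Points = 0
          AND (lg.LastGoodDate IS NULL OR z.Date > lg.LastGoodDate)   -- only rows after the last good day
        GROUP BY z.Name;

    Against the sample inserts this returns Jane = 1 and John = 4, matching the expected answer.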

    Read the article

  • Chalk Talk with John: How Does SOA Add Value to Your Enterprise?

    - by John Brunswick
    In this episode of Chalk Talk with John we revisit our town of Middleware Fields from What Does User Experience Mean to You? to look at demystifying the business value of SOA. Middleware Fields is an extremely eco-conscious community and has been trying to set up a commuting program for its employees. Though it is a good idea, they soon run into challenges ensuring that people are able to use the commuting services easily. Take a look below to see how SOA is like a transit pass for your enterprise and how it addresses common issues you may have with your enterprise systems.

    About me: Hi, I am John Brunswick, an Oracle Enterprise Architect. In that role I focus on the alignment of technical capabilities in support of business vision and objectives, as well as the overall business value of technology. Before coming to Oracle, I was a Practice Manager within BEA Systems' Business Interaction Division consulting organization, orchestrating enterprise systems in support of line-of-business goals. Follow me on Twitter and visit my site for Oracle Fusion Middleware related tips.

    Read the article

  • Converting John Resig's Templating Engine to work with PHP Templates

    - by Serhiy
    I'm trying to convert John Resig's new ASP.NET templating engine to work with PHP. Essentially what I would like to achieve is the ability to use certain Kohana Views via a JavaScript templating engine, that way I can use the same views for both a standard PHP request and a jQuery AJAX request. I'm starting with the basics and would like to be able to convert http://github.com/nje/jquery-tmpl/blob/master/jquery.tmpl.js to work with PHP like so...

        <li><a href="{%= link %}">{%= title %}</a> - {%= description %}</li>
        <li><a href="<?= $link ?>"><?= $title ?></a> - <?= description ?></li>

    The RegEx in it is a bit over my head and it's apparently not as easy as changing the %} to ? in lines 148 to 158. Any help would be highly appreciated. I'm also not sure of how to take care of the $ difference that PHP variables have. Thanks, Serhiy
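    For what it's worth, one way to attack this is to change every delimiter reference in the string-building chain (not just the split() calls) and strip a leading $ from the echo tags. The JavaScript below is a hypothetical stand-alone sketch, not a patch to jquery.tmpl.js itself; phpTmpl is an invented name, and it assumes the views only use simple <?= $var ?> echo tags with scalar variables:

        // Hypothetical helper: renders PHP-style echo tags so the same Kohana
        // view fragment can be used by PHP on the server and JS on the client.
        function phpTmpl(str, data) {
          var fn = new Function("obj",
            "var p=[],print=function(){p.push.apply(p,arguments);};" +
            "with(obj){p.push('" +
            str.replace(/[\r\t\n]/g, " ")
               .split("<?").join("\t")                   // mark tag openings
               .replace(/((^|\?>)[^\t]*)'/g, "$1\r")     // protect quotes in literal HTML
               .replace(/\t=\s*\$?(.*?)\?>/g, "',$1,'")  // <?= $var ?>  ->  ',var,'
               .split("\t").join("');")                  // any remaining <? opens a code block
               .split("?>").join("p.push('")             // ?> closes it again
               .split("\r").join("\\'") +
            "');}return p.join('');");
          return fn(data);
        }

        // Usage sketch:
        // phpTmpl('<li><a href="<?= $link ?>"><?= $title ?></a></li>',
        //         { link: "/posts/1", title: "Hello" });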

    Read the article

  • Converting John Resig's JavaScript Templating Engine to work with PHP Templates

    - by Serhiy
    I'm trying to convert John Resig's templating engine to work with PHP. Essentially what I would like to achieve is the ability to use certain Kohana Views via a JavaScript templating engine, that way I can use the same views for both a standard PHP request and a jQuery AJAX request. I'm starting with the basics and would like to be able to convert http://github.com/nje/jquery-tmpl/blob/master/jquery.tmpl.js to work with PHP like so...

        ### From This ###
        <li><a href="{%= link %}">{%= title %}</a> - {%= description %}</li>

        ### Into This ###
        <li><a href="<?= $link ?>"><?= $title ?></a> - <?= description ?></li>

    The RegEx in it is a bit over my head and it's apparently not as easy as changing the %} to ? in lines 148 to 158. Any help would be highly appreciated. I'm also not sure of how to take care of the $ difference that PHP variables have. Thanks, Serhiy

    Read the article

  • Syntax Error with John Resig's Micro Templating.

    - by optician
    I'm having a bit of trouble with John Resig's micro-templating. Can anyone help me with why it isn't working? This is the template:

        <script type="text/html" id="row_tmpl">
            test content {%=id%} {%=name%}
        </script>

    And the modified section of the engine:

        str .replace(/[\r\t\n]/g, " ")
            .split("{%").join("\t")
            .replace(/((^|%>)[^\t]*)'/g, "$1\r")
            .replace(/\t=(.*?)%>/g, "',$1,'")
            .split("\t").join("');")
            .split("%}").join("p.push('")
            .split("\r").join("\\'")
            + "');}return p.join('');");

    And the JavaScript:

        var dataObject = { "id": "27", "name": "some more content" };
        var html = tmpl("row_tmpl", dataObject);

    And the result - as you can see, =id and =name seem to be in the wrong place? Apart from changing the template syntax blocks from <% %> to {% %} I haven't changed anything. This is from Firefox:

        Error: syntax error
        Line: 30, Column: 89
        Source Code:
        var p=[],print=function(){p.push.apply(p,arguments);};with(obj){p.push(' test content ');=idp.push(' ');=namep.push(' ');}return p.join('');
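    The error output gives the clue: the {%=id%} tags are never rewritten, because two of the .replace() calls in the modified chain still look for the old closing delimiter %> instead of %}, so the "=id" and "=name" fall through into the generated function body. A hedged sketch of the corrected chain, changing only those delimiters and leaving the rest of Resig's logic as is:

        str.replace(/[\r\t\n]/g, " ")
           .split("{%").join("\t")
           .replace(/((^|%\})[^\t]*)'/g, "$1\r")   // was %> -- must match the new closing delimiter
           .replace(/\t=(.*?)%\}/g, "',$1,'")      // was %> -- this is why '=id' leaked through
           .split("\t").join("');")
           .split("%}").join("p.push('")
           .split("\r").join("\\'")
           + "');}return p.join('');");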

    Read the article

  • Senator John McCain calls for the dismissal or resignation of the NSA chief in an interview with Der Spiegel

    Senator John McCain calls for the dismissal or resignation of the NSA chief in an interview with Der Spiegel. In an interview with the German news magazine Der Spiegel, former presidential candidate Senator John McCain called for the resignation of General Keith Alexander, commander-in-chief of the NSA, following revelations that German Chancellor Angela Merkel had been wiretapped. For the Senator, this wiretapping was a mistake, and he even adds that between...

    Read the article

  • What’s New for Oracle Commerce? Executive QA with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews, to get his perspective on what continues to make Oracle a leader in the industry and what's new for Oracle Commerce in 2013.

    Q: Why do you believe Oracle Commerce continues to be a leader in the industry?
    John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability.

    Q: It's not a slow-moving industry. What are you doing to keep the pace of innovation at Oracle Commerce?
    John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done, in a way that drives business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant release for the solution since the acquisitions of ATG and Endeca, but also demonstrated rapid and significant progress on the unification of the commerce and customer experience capabilities of the two technologies.

    Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella?
    John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways:

    Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development, and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both iPhone and iPad devices from a single download or binary. Business users can now drive page content management and the layout of search results and category pages, as well as create additional storefront elements such as categories, facets/dimensions, and breadcrumbs through Experience Manager tools.

    Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web.

    Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture allow business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users' site-based access, and manage personalization assets using site groups.

    Internationalization: continued language support and enhancements for business user tools as well as search and navigation. Guided Search now supports 35 total languages, with 11 new languages (including Danish, Arabic, Norwegian, Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales, with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages.

    Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotions rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature-rich, providing intuitive, easy-to-use (yet powerful) capabilities that allow business users to manage content and the shopping experience.

    Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities includes productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application.

    The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands.

    To learn more about the latest Oracle Commerce releases and mission, visit the links below:
    • Hear more from John about the Oracle Commerce mission
    • Hear from Oracle Commerce customers
    • Documentation on the new releases
    • Listen to the Oracle ATG Commerce 10.2 Webcast
    • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >