Search Results

Search found 1591 results on 64 pages for 'implementations'.

Page 16 of 64

  • Introduction to Oracle's Primavera P6 Analytics

    Primavera P6 Analytics is a new product offering from Oracle which uses the Oracle BI product, Oracle Business Intelligence Suite Enterprise Edition Plus (Oracle BI EE Plus), to provide root-cause analysis, manage-by-exception capabilities, and early-warning problem indicators for your P6 Enterprise Project Portfolio Management projects and portfolios. Out-of-the-box reports and dashboards provide drill-down and drill-through insight so you can jump directly into the problem areas in Primavera P6 Enterprise Project Portfolio Management to course-correct or implement best practices based on successful project implementations. A complete view into all of your projects' histories provides the trends and analysis needed to identify the best course of action to take next.

    Read the article

  • Suggestions for Future or On-The-Edge Languages (2011)

    - by Kurtis
    I'm just looking for some suggestions on newer languages and language implementations that are useful for string manipulation. It's now 2011 and a lot has changed over the years. Most of my work involves web development (which is mostly text-based) and command-line scripting. I'm pretty language-agnostic, although I've felt violated using PHP over the years. My only requirements are that the language is good at text manipulation without a lot of third-party libraries (core libraries are okay, though), and that the language and/or standard implementation is very up to date or even "futuristic". For example, the two main languages I'm looking at right now are Python (version 3.x) and Perl (version 6.x). Research, academic, and experimental languages are okay with me. I don't mind functional languages, although I'd like to have the option of programming in a procedural or even object-oriented manner. Thanks!

    Read the article

  • Dependency Injection Confusion

    - by James
    I think I have a decent grasp of what the Dependency Inversion Principle (DIP) is; my confusion is more around dependency injection. My understanding is that the whole point of DI is to decouple parts of an application, allowing changes in one part without affecting another, assuming the interface does not change. For example's sake, we have this:

        public class MyClass
        {
            public MyClass(IMyInterface myInterface)
            {
                myInterface.DoSomething();
            }
        }

        public interface IMyInterface
        {
            void DoSomething();
        }

    How is this:

        var iocContainer = new UnityContainer();
        iocContainer.Resolve<MyClass>();

    better practice than doing this:

        // if multiple implementations are possible, could use a factory here.
        IMyInterface myInterface = new InterfaceImplementation();
        var myClass = new MyClass(myInterface);

    It may be that I am missing a very important point, but I am failing to see what is gained. I am aware that using an IoC container I can easily handle an object's life cycle, which is a +1, but I don't think that is core to what IoC is about.
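
    Unity is a .NET container, but the idea is language-neutral; here is a toy container sketched in Java (all names hypothetical, not any real library's API). What Resolve buys you shows up as the object graph deepens: the interface-to-implementation binding is stated exactly once, at the composition root, instead of at every construction site.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Supplier;

        interface MyInterface {
            void doSomething();
        }

        class InterfaceImplementation implements MyInterface {
            public void doSomething() { System.out.println("doing something"); }
        }

        class MyClass {
            MyClass(MyInterface dependency) {  // constructor injection: no concrete type named here
                dependency.doSomething();
            }
        }

        // A toy container: all bindings live in one place, the composition root.
        class TinyContainer {
            private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

            <T> void bind(Class<T> type, Supplier<? extends T> factory) {
                bindings.put(type, factory);
            }

            @SuppressWarnings("unchecked")
            <T> T resolve(Class<T> type) {
                return (T) bindings.get(type).get();
            }
        }

        public class Demo {
            public static void main(String[] args) {
                // Manual wiring: every caller repeats the concrete choice.
                MyClass byHand = new MyClass(new InterfaceImplementation());

                // Container wiring: the concrete choice appears exactly once.
                TinyContainer c = new TinyContainer();
                c.bind(MyInterface.class, InterfaceImplementation::new);
                c.bind(MyClass.class, () -> new MyClass(c.resolve(MyInterface.class)));
                MyClass resolved = c.resolve(MyClass.class);
            }
        }

    Swapping InterfaceImplementation for a test double then means changing one bind call, with no edits to any consumer.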

    Read the article

  • initial Class design: access modifiers and no-arg constructors

    - by yas
    Context: student working through class design in a personal/side project for the summer. I've never written anything implemented by others or had to maintain code. I'm trying to maximize encapsulation and imagine what would make code easy to maintain. Concept: tight/loose class design, where "tight" and "loose" refer to access modifiers and constructors. Tight: initially, everything, including setters, is private, and a no-arg constructor is not provided (only a full constructor). Loose: not tight. Exceptions: the obvious, like toString. Reasoning: if code is tight from the very beginning, then changes with respect to access/creation are guaranteed never to damage existing implementations. Loosening happens incrementally and must be thought through, justified, and safe (validated). Benefit: existing implementing code should not break if changes are made later. Cost: takes more time to create. Since this is my own thinking, I hope to get feedback as to whether I should push to work this way. Good idea or bad idea?
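
    As a concrete (and hypothetical) Java sketch of the "tight" starting point: only the full constructor exists, everything that can be private is, and loosening any of it later is a deliberate, compatible change.

        // Tight by default: no no-arg constructor, no public setters, no subclassing.
        public final class SensorReading {
            private final String unit;
            private final double value;

            public SensorReading(String unit, double value) {  // full constructor only
                this.unit = unit;
                this.value = value;
            }

            // Accessors start private and are promoted one by one,
            // only when a caller demonstrates a real need.
            private double value() { return value; }

            @Override
            public String toString() {                         // one of the obvious exceptions
                return value + " " + unit;
            }
        }

    Making a private member public later breaks no existing caller, while the reverse direction does; that asymmetry is what the tight-first rule exploits.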

    Read the article

  • Chargeback and billing across public and private clouds

    - by llaszews
    Had a great conversation today regarding the need for metering, chargeback, and billing of cloud computing resources. The person I spoke with, at a Fortune 1000 company, increased the scope and magnitude of the issue of billing for cloud computing resources beyond what I had previously considered. I had believed that doing any type of chargeback and billing for one public, private or hybrid installation was difficult. This person pointed out that the problem is even bigger in scope. The reality is that many companies are using multiple public cloud vendors and have many different private cloud data centers. A customer may use AWS for some smaller public cloud applications, Salesforce.com (SaaS), Rackspace for IaaS, Savvis for colocation, and a variety of IaaS and PaaS implementations for the private cloud. How does a company get a consolidated bill for all these different cloud environments? I am not sure there is an answer right now.

    Read the article

  • OSB, Service Callouts and OQL - Part 1

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This post will delve into OSB internals; the problem associated with usage of Service Callout under high loads; diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language); and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. Please refer to the blog post for more details.

    Read the article

  • Microsoft first vendor to publish its JavaScript specifications: JScript 5.7 and 5.8, its implementations of JavaScript for Internet Explorer 7 and 8

    Microsoft is the first vendor to publish its JavaScript specifications: JScript 5.7 and 5.8, the JavaScript implementations for Internet Explorer 7 and 8. Microsoft has just published the specifications for JScript: JScript 5.7 for Internet Explorer 7 and JScript 5.8 for Internet Explorer 8. As a reminder, JScript is Redmond's in-house implementation of JavaScript, used in its browsers. JavaScript follows the standards of the ECMA-262 specification (also known as ECMAScript), whose goal is to standardize, as far as possible, the framework web developers work in. Hence the names of the two documents recently published on MSDN: "Internet Explorer ECMA-262 ECMAScript Language ...

    Read the article

  • Are there any free hit counters that don't track users?

    - by David Englund
    Are there any free services that increment a simple hit counter without tracking the users of the site? I would like to know how many visitors there are to my site, excluding bots. I don't need detailed information like unique visitors or where the user is from (in fact, that's exactly what I don't want). I have been researching free hit counters, and it seems that most (all?) of them display advertisements and their terms of service indicate that they can use the data they collect from the client site however they want. Google Analytics also does this and tracks users across sites. The site is static HTML, so an external link or iframe of some sort is easiest for me to implement. I could switch to a Ruby or Node.js back-end, in which case lots of other options open up (like Ruby impressionist and more low-level implementations), but my hosting service is pretty limited. If the answer to my question is simply "no," what are my other options?

    Read the article

  • Are there similarities between operating system kernels and programming language kernels?

    - by rahmu
    I know very little about Smalltalk, but I noticed that there's frequent mention of the "kernel". Dan Ingalls, prime maintainer of several Smalltalk implementations, also worked on a JavaScript environment called the "Lively Kernel", and in Peter Seibel's book he kept mentioning the "kernel". I cannot help but think that it is no coincidence that the creators of Smalltalk used the name of a (central) part of operating systems to refer to a particular component of their language. Was it because Smalltalk was intended to act as an operating system? Was it because the theories behind programming languages and operating systems have a lot in common? What is the reason behind the common appellation of the two components?

    Read the article

  • SSIS 2008 Configuration Settings Handling Logic for Variables Visualized

    - by Compudicted
    There are many articles discussing the specifics of how configuration settings are applied, including the differences between the SSIS 2005 and 2008 implementations; however, this topic keeps resurfacing on MSDN's SSIS Forum. I thought it could be useful to cover the logic aspect visually. Below is a diagram explaining the basic flow of a variable setting for a case when no parent package is involved. As you can see, the run-time stage ignores any command-line flags for variables already set in the config file; I realize this is not stressed enough in many publications. Besides, another interesting fact is that the command-line dtexec tool is case-sensitive for the portion following the package keyword. That is, if you specify your flag to set a new value for a variable like dtexec /f Package.dtsx -set \package.variables[varPkgMyDate].value;02/01/2011 (notice the lower-case v in .value), you will get errors. Capitalizing the keyword (.Value) lets the package run successfully.

    Read the article

  • Implement Fast Inverse Square Root in Javascript?

    - by BBz
    The Fast Inverse Square Root from Quake III seems to use a floating-point trick. As I understand it, floating-point representation can have some different implementations. So is it possible to implement the Fast Inverse Square Root in JavaScript? Would it return the same result?

        float Q_rsqrt(float number)
        {
            long i;
            float x2, y;
            const float threehalfs = 1.5F;

            x2 = number * 0.5F;
            y  = number;
            i  = * ( long * ) &y;
            i  = 0x5f3759df - ( i >> 1 );
            y  = * ( float * ) &i;
            y  = y * ( threehalfs - ( x2 * y * y ) );

            return y;
        }
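
    The trick only requires reinterpreting the bits of an IEEE-754 single-precision float as an integer, and JavaScript exposes exactly that through a Float32Array and an Int32Array sharing one ArrayBuffer. Here is a sketch of the same bit manipulation in Java, where Float.floatToRawIntBits/intBitsToFloat play the role of the pointer casts above:

        public class FastInverseSqrt {
            static float qRsqrt(float number) {
                float x2 = number * 0.5f;
                int i = Float.floatToRawIntBits(number);  // i = * ( long * ) &y;
                i = 0x5f3759df - (i >> 1);                // magic-constant initial guess
                float y = Float.intBitsToFloat(i);        // y = * ( float * ) &i;
                return y * (1.5f - (x2 * y * y));         // one Newton-Raphson step
            }

            public static void main(String[] args) {
                System.out.println(qRsqrt(4.0f));         // ~0.4991, vs. the exact 0.5
            }
        }

    Since IEEE-754 fixes the bit layout, a faithful port returns the same bits on conformant platforms; the one caveat in JavaScript is that arithmetic on the reinterpreted values happens in double precision, so intermediate results must be rounded back to single precision (Math.fround) to match the C version exactly.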

    Read the article

  • SAP NetWeaver Cloud Java EE 6 Web Profile Certified!

    - by reza_rahman
    We are very pleased to welcome SAP NetWeaver Cloud to the Java EE 6 family! SAP successfully certified NetWeaver Cloud SDK-2.x.Beta against the Java EE 6 Web Profile TCK. This brings the number of Web Profile implementations to no less than seven and the total number of certified platforms on the official Java EE compatibility page to eighteen. Other Java EE 6 Web Profile platforms include the likes of GlassFish, JBoss AS, Resin and Apache TomEE. Under the hood, SAP NetWeaver Cloud uses EclipseLink, Tomcat and OpenEJB. The NetWeaver team encourages you to try it out and send them feedback. More details here.

    Read the article

  • Oracle University Partner Enablement Update (4th April)

    - by swalker
    Get ready for Fusion Applications implementations: Oracle University has scheduled the first Oracle Fusion Applications Implementation courses. To view the schedule, see the listings for: Italy, France, The Netherlands, UK. If you can't find an in-class event in a country near you, why don't you try a Live Virtual Class? View the UK link above and check out the Location: Online. All courses can be booked via the websites. For more information, or for assistance with booking and scheduling requests, contact your local Oracle University Service Desk. Stay connected to Oracle University: LinkedIn, OracleMix, Twitter, Facebook, Google+

    Read the article

  • OpenJDK In The News: Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012

    The OpenJDK Community continues to host the development of the reference implementation of Java SE 8. Weekly developer preview builds of JDK 8 continue to be available from jdk8.java.net. OpenJDK continues to thrive with contributions from Oracle, as well as other companies, researchers and individuals. The OpenJDK Web Site Terms of Use was recently updated to allow work on Java Specification Requests (JSRs) for Java SE to take place in the OpenJDK Community, alongside their corresponding reference implementations, so that specification leads can satisfy the new transparency requirements of the Java Community Process (JCP 2.8). "The recent decision by the Java SE 8 Expert Group to defer modularity to Java SE 9 will allow us to focus on the highly-anticipated Project Lambda, the Nashorn JavaScript engine, the new Date/Time API, and Type Annotations, along with numerous other performance, simplification, and usability enhancements," said Georges Saab, vice president, Software Development, Java Platform Group at Oracle. "We are continuing to increase our communication and transparency by developing the reference implementation and the Oracle-led JSRs in the OpenJDK community." Quotes taken from the 14th press release from Oracle mentioning OpenJDK, titled "Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012".

    Read the article

  • Java best practice Interface - subclasses and constants

    - by Taiko
    In the case where a couple of classes implement an interface, and those classes have a couple of constants in common (but no functions), where should I put these constants? I've had this problem a couple of times. I have this interface, DataFromSensors, that I use to hide the implementations of several subclasses like DataFromHeartRateMonitor, DataFromGps, etc. For some reason, those classes use the same constants, and there's nowhere else in the code where they are used. My question is, where should I put those constants? Not in the interface, because it has nothing to do with my API. Not in a static Constants class, because I'm trying to avoid those. Not in a common abstract class that would stand between the interface and the subclasses, because I have no functions in common, only a couple of constants (TIMEOUT_DURATION, UUID, those kinds of things). I've read "best practice for constants" and "interface to define constants" but they don't really answer my question. Thanks!
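
    One common compromise, sketched here with hypothetical names and values (not the only answer): a package-private holder class that sits next to the implementations, so the constants are shared without widening the interface or forcing an artificial abstract parent.

        public interface DataFromSensors {
            // API methods only; no constants here.
        }

        // Package-private: visible to the implementations in this package,
        // invisible to clients of the public API, and never inherited.
        final class SensorConstants {
            static final long TIMEOUT_DURATION_MS = 5_000;
            static final String SERVICE_UUID = "0000180d-0000-1000-8000-00805f9b34fb";

            private SensorConstants() { }  // no instances; pure shared data
        }

        class DataFromHeartRateMonitor implements DataFromSensors {
            private final long timeoutMs = SensorConstants.TIMEOUT_DURATION_MS;
        }

        class DataFromGps implements DataFromSensors {
            private final long timeoutMs = SensorConstants.TIMEOUT_DURATION_MS;
        }

    Unlike the constant-interface approach, nothing leaks into the public API; unlike an abstract base class, the implementations share data without sharing an inheritance relationship.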

    Read the article

  • When should code favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is that the API should be as close to the domain language as possible. While working on the design, I've noticed that there are some cases in the code where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about favouring ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use cases, such as when performance is required. I would like to know if there are more general reasons that argue for a design preferring more efficient implementations, even if only marginally so.
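
    As a hypothetical Java illustration of the trade-off being described: the domain-flavored call reads like the requirement, but introduces a functionally unnecessary wrapper object per call that the direct form avoids.

        public class Schedule {
            // Direct form: no allocation, but the call site reads like plumbing.
            static boolean isWeekend(int isoDayOfWeek) {
                return isoDayOfWeek >= 6;
            }

            // Domain-language form: "day(7).isWeekend()" reads like the spec,
            // at the cost of one short-lived wrapper object per call.
            static final class Day {
                private final int isoDayOfWeek;
                Day(int isoDayOfWeek) { this.isoDayOfWeek = isoDayOfWeek; }
                boolean isWeekend() { return isoDayOfWeek >= 6; }
            }

            static Day day(int isoDayOfWeek) {
                return new Day(isoDayOfWeek);
            }

            public static void main(String[] args) {
                System.out.println(isWeekend(7));        // true, direct
                System.out.println(day(7).isWeekend());  // true, via the readable API
            }
        }

    On HotSpot, escape analysis frequently eliminates such short-lived wrappers anyway, which is one general argument against paying a readability cost up front.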

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:

      - Naive algorithms had an unreasonable upper bound of computational cost.
      - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
      - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

    A custom PAS may need to solve several problems simultaneously, such as:

      - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
      - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
      - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, ensuring that each partition is collocated with its corresponding message queue)
      - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
      - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
      - Achieving the above goals while ensuring that partition movement is minimized

    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity, since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.

    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, then apply that function to each possible permutation and select the most optimal one. This would result in N! (factorial) evaluations of the scoring function, which is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway is that algorithms will tend either to have exorbitant worst-case performance or to fail to find optimal solutions (or both); it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts, supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
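
    To make the cost argument concrete: instead of scoring all N! permutations, a constrained approach scores incremental placements. The sketch below is hypothetical, unrelated to the engagement's actual code, and does not use the real Coherence PAS interface; it greedily places each partition on the member whose current load scores best, costing O(partitions x members) per pass rather than N! evaluations.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class GreedyPrimaryAssignment {
            // Scoring function: lower is better. A real one would also weigh
            // backup distance, machine boundaries, and movement from the old map.
            static int score(Map<String, List<Integer>> owned, String member) {
                return owned.get(member).size();
            }

            static Map<String, List<Integer>> assign(int partitionCount, List<String> members) {
                Map<String, List<Integer>> owned = new HashMap<>();
                for (String m : members) owned.put(m, new ArrayList<>());

                for (int p = 0; p < partitionCount; p++) {
                    String best = members.get(0);
                    for (String m : members) {
                        if (score(owned, m) < score(owned, best)) best = m;
                    }
                    owned.get(best).add(p);  // place partition p on the best-scoring member
                }
                return owned;
            }

            public static void main(String[] args) {
                // 13 partitions over 3 members -> sizes 5/4/4, without enumerating permutations.
                System.out.println(assign(13, List.of("m1", "m2", "m3")));
            }
        }

    Greedy placement is not guaranteed optimal once objectives conflict, which is exactly the article's point: constrained, domain-specific algorithms trade optimality for provably acceptable worst-case cost.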

    Read the article

  • Product Value Chain Management: How Oracle is Taking the Lead on Next Generation Enterprise PLM

    To manage growing product complexity and innovation challenges, Product Lifecycle Management solutions have become staple IT investments over the years. But as product information continues to span more and more functions inside the company and out, we've seen many customer PLM implementations adapt and expand to serve new needs in a fully connected world. We call this next level of PLM the Product Value Chain: an integrated business model that offers powerful new strategies for executives to collectively leverage PLM and other industry-leading Oracle applications to achieve further incremental value. In this Appcast, hear Terri Hiskey, Director of PLM Product Marketing, and John Kelley, VP of PLM Product Strategy, discuss Oracle's vision for next generation enterprise PLM: the Product Value Chain.

    Read the article

  • Can I use AAC in a commercial app for free?

    - by Jason123
    I was wondering if I can use the AAC codec in my commercial app for free (through LGPL ffmpeg). The wiki says: "No licenses or payments are required to be able to stream or distribute content in AAC format. This reason alone makes AAC a much more attractive format to distribute content than MP3, particularly for streaming content (such as Internet radio). However, a patent license is required for all manufacturers or developers of AAC codecs. For this reason free and open source software implementations such as FFmpeg and FAAC may be distributed in source form only, in order to avoid patent infringement. (See below under Products that support AAC, Software.)" But the XSplit program had to cancel AAC support for free members because they have to pay royalties per user. Is it true that you have to pay for each person that uses AAC? If you do have to pay, which company do you pay, and how does one apply?

    Read the article

  • When should an API favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is to use as much of the native domain language as possible in the API. While doing so, I've noticed that there are some cases in the API outline where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about favouring ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use cases, such as when performance is required. I would like to know if there are more general reasons that argue for an API design preferring (marginally) more efficient implementations.

    Read the article

  • ASP.NET Image to PDF with VB.Net

    This tutorial will discuss how to use the PDFSharp library on images. Specifically, it will cover how to do simple JPG to PDF conversion using PDFSharp in VB.NET. This has a lot of applications in ASP.NET implementations, such as converting a document online that contains either a pure image or both text and images.

    Read the article

  • Earliest use of Comments as Semantically Meaningful Things in a Program?

    - by Alan Storm
    In certain corners of the PHP meta-programming world, it has become fashionable to use PHPDoc comments as a mechanism for providing semantically meaningful information to a program. That is, other code will parse the doc blocks and do something significant with the information encoded in those comments. Doctrine's annotations and code generation are an example of this. What's the earliest (or some early) use of this technique? I have vague memories of some early Java Design by Contract implementations doing similar things, but I'm not sure if those folks invented the technique or got it from somewhere else. I'm mainly asking so I can provide some historical context for PHP developers who haven't come across the technique before, and are distrustful of it because it seems a little crazy pants.

    Read the article

  • What's the term describing this system for generating user interfaces?

    - by mjfgates
    So, there's this idea, which you already know: Define the layout of your UI by creating a tree of panels. The leaf nodes on the tree are what we used to call 'controls' way back in the day-- the things that the user interacts with, radio buttons and listboxes and such. The internal nodes are mostly concerned with layout; this kind of panel stacks its child panels vertically, that kind puts its children into a grid, etc. It's COMMON. Most of the UI-generating systems I've seen in the past twenty years are implementations of this, and the ones that aren't borrow from it. What's the word for this idea?
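
    For concreteness, a minimal Swing rendering of the structure being described, where the internal nodes are containers whose layout managers arrange their children and the leaves are the controls (the pattern usually goes by names like "widget tree", "component/container hierarchy", or, focusing on the internal nodes, "layout managers"):

        import javax.swing.*;
        import java.awt.*;

        public class PanelTreeDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JPanel column = new JPanel();                    // internal node: stacks children vertically
                    column.setLayout(new BoxLayout(column, BoxLayout.Y_AXIS));

                    JPanel grid = new JPanel(new GridLayout(2, 2));  // internal node: 2x2 grid of children
                    grid.add(new JRadioButton("A"));                 // leaf nodes: the controls
                    grid.add(new JRadioButton("B"));
                    grid.add(new JButton("OK"));
                    grid.add(new JButton("Cancel"));

                    column.add(new JLabel("Pick one:"));
                    column.add(grid);

                    JFrame frame = new JFrame("Panel tree");
                    frame.setContentPane(column);                    // the root of the tree
                    frame.pack();
                    frame.setVisible(true);
                });
            }
        }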

    Read the article

  • Security Alert for CVE-2011-5035 Updated

    - by Eric P. Maurice
    Hi, this is Eric Maurice again.  Oracle has just updated the Security Alert for CVE-2011-5035 to announce the availability of additional fixes for products that were affected by this vulnerability through their use of the WebLogic Server and Oracle Container for J2EE components.  As explained in a previous blog entry, a number of programming language implementations and web servers were found vulnerable to hash table collision attacks.  This vulnerability is typically remotely exploitable without authentication, i.e., it may be exploited over a network without the need for a username and password.  If successfully exploited, malicious attackers can use this vulnerability to create denial of service conditions against the targeted system. A complete list of affected products and their versions, as well as instructions on how to obtain the fixes, are listed on the Security Alert Advisory.  Oracle highly recommends that customers apply these fixes as soon as possible.
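
    As a generic illustration of the attack class (a sketch of the mechanism only, not of the Oracle vulnerability or its fix): in Java, "Aa" and "BB" share a hashCode, and concatenating such blocks yields exponentially many keys that all land in one hash bucket, so each insert into a HashMap-backed structure degenerates into a linear scan.

        import java.util.ArrayList;
        import java.util.List;

        // Generates 2^blocks strings that all share one hashCode by concatenating
        // "Aa"/"BB" blocks ("Aa".hashCode() == "BB".hashCode() == 2112).
        public class CollisionDemo {
            static List<String> collidingKeys(int blocks) {
                List<String> keys = new ArrayList<>();
                keys.add("");
                for (int i = 0; i < blocks; i++) {
                    List<String> next = new ArrayList<>();
                    for (String k : keys) {
                        next.add(k + "Aa");
                        next.add(k + "BB");
                    }
                    keys = next;
                }
                return keys;
            }

            public static void main(String[] args) {
                List<String> keys = collidingKeys(16);  // 65,536 keys, all in one bucket
                System.out.println(keys.size() + " keys; hashCode of first/last: "
                        + keys.get(0).hashCode() + " / " + keys.get(keys.size() - 1).hashCode());
                // Feeding such keys to a server that hashes request parameters is the
                // denial-of-service vector this class of advisory describes.
            }
        }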

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting ACM Queue article about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. You can easily verify that the rootblock pointer in the ZFS uberblock points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it dedups internally, it may reduce your redundant metadata to a single block on the non-volatile storage. When that block is corrupted, you have essentially three corrupted copies. Three hits with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication as it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block. To its inner mechanisms, a metadata block is no different from a normal data block; there is no way to tell it that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The catch is that on most implementations you have to activate it explicitly, whereas certain devices do it by default or by design, and you don't know about it. Given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. But when your device dedups internally, it may remove that redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy with the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason to spend some extra thought before putting your zpool on a single LUN, especially when that LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating-rust disks form the pool and SSDs serve as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt: you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in HSP implementations is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and whose LUNs you use for your pool; but as mentioned before, on those devices enabling dedup is a user-made decision, so it's less probable that you are deduplicating your redundancies unknowingly. Filesystems lacking a capability similar to hybrid storage pools are more "haunted" by SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it merely as an accelerating device.

    In the end, though, Robin is correct: this is yet another reason why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a single device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article
