Search Results

Search found 15466 results on 619 pages for 'oracle partitioning'.


  • Custom Lookup Provider For NetBeans Platform CRUD Tutorial

    - by Geertjan
    For a long time I've been planning to rewrite the second part of the NetBeans Platform CRUD Application Tutorial to integrate the loosely coupled capabilities introduced in a separate series of articles, based on articles by Antonio Vieiro (a great series, by the way). Nothing like getting into the Lookup stuff right from the get-go (rather than as an afterthought)! The question, of course, is how to integrate the loosely coupled capabilities in a logical way within that tutorial. Today I worked through the tutorial from scratch, up until the point where the prototype is completed, i.e., there's a JTextArea displaying data pulled from a database. That brought me to the place where I needed to be. In fact, as soon as the prototype is completed, i.e., the database connection has been shown to work, the whole story about Lookup.Provider and InstanceContent should be introduced, so that all the subsequent sections, i.e., everything within "Integrating CRUD Functionality", will be done by adding new capabilities to the Lookup.Provider. However, before I perform open heart surgery on that tutorial, I'd like to run the scenario by all those reading this blog who understand what I'm trying to do! (I.e., probably anyone who has read this far into this blog entry.) So, this is what I propose should happen, and in this order: Point out the fact that right now the database access code is found directly within our TopComponent. Not good, because you're mixing view code with data code and, ideally, the developers creating the user interface wouldn't need to know anything about the data access layer. Better to separate out the data access code into a separate class, within the CustomerLibrary module, i.e., far away from the module providing the user interface, with this content: public class CustomerDataAccess { public List<Customer> getAllCustomers() { return Persistence.createEntityManagerFactory("CustomerLibraryPU"). createEntityManager().createNamedQuery("Customer.findAll").getResultList(); } } Point out the fact that there is a concept of "Lookup" (which readers of the tutorial should know about since they should have followed the NetBeans Platform Quick Start), which is a registry into which objects can be published and to which other objects can listen. In the same way as a TopComponent provides a Lookup, as demonstrated in the NetBeans Platform Quick Start, your own object can also provide a Lookup. So, therefore, let's provide a Lookup for Customer objects.  import org.openide.util.Lookup; import org.openide.util.lookup.AbstractLookup; import org.openide.util.lookup.InstanceContent; public class CustomerLookupProvider implements Lookup.Provider { private Lookup lookup; private InstanceContent instanceContent; public CustomerLookupProvider() { // Create an InstanceContent to hold capabilities... instanceContent = new InstanceContent(); // Create an AbstractLookup to expose the InstanceContent... lookup = new AbstractLookup(instanceContent); // Add a "Read" capability to the Lookup of the provider: //...to come... // Add a "Update" capability to the Lookup of the provider: //...to come... // Add a "Create" capability to the Lookup of the provider: //...to come... // Add a "Delete" capability to the Lookup of the provider: //...to come... } @Override public Lookup getLookup() { return lookup; } } Point out the fact that, in the same way as we can publish an object into the Lookup of a TopComponent, we can now also publish an object into the Lookup of our CustomerLookupProvider.
Instead of publishing a String, as in the NetBeans Platform Quick Start, we'll publish an instance of our own type. And here is the type: public interface ReadCapability { public void read() throws Exception; } And here is an implementation of our type added to our Lookup: public class CustomerLookupProvider implements Lookup.Provider { private Set<Customer> customerSet; private Lookup lookup; private InstanceContent instanceContent; public CustomerLookupProvider() { customerSet = new HashSet<Customer>(); // Create an InstanceContent to hold capabilities... instanceContent = new InstanceContent(); // Create an AbstractLookup to expose the InstanceContent... lookup = new AbstractLookup(instanceContent); // Add a "Read" capability to the Lookup of the provider: instanceContent.add(new ReadCapability() { @Override public void read() throws Exception { ProgressHandle handle = ProgressHandleFactory.createHandle("Loading..."); handle.start(); customerSet.addAll(new CustomerDataAccess().getAllCustomers()); handle.finish(); } }); // Add a "Update" capability to the Lookup of the provider: //...to come... // Add a "Create" capability to the Lookup of the provider: //...to come... // Add a "Delete" capability to the Lookup of the provider: //...to come... } @Override public Lookup getLookup() { return lookup; } public Set<Customer> getCustomers() { return customerSet; } } Point out that we can now create a new instance of our Lookup (in some other module, so long as it has a dependency on the module providing the CustomerLookupProvider and the ReadCapability), retrieve the ReadCapability, and then do something with the customers that are returned, here in the rewritten constructor of the TopComponent, without needing to know anything about how the database access is actually achieved since that is hidden in the implementation of our type, above: public CustomerViewerTopComponent() { initComponents(); setName(Bundle.CTL_CustomerViewerTopComponent()); setToolTipText(Bundle.HINT_CustomerViewerTopComponent()); // EntityManager entityManager = Persistence.createEntityManagerFactory("CustomerLibraryPU").createEntityManager(); // Query query = entityManager.createNamedQuery("Customer.findAll"); // List<Customer> resultList = query.getResultList(); // for (Customer c : resultList) { // jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n"); // } CustomerLookupProvider lookup = new CustomerLookupProvider(); ReadCapability rc = lookup.getLookup().lookup(ReadCapability.class); try { rc.read(); for (Customer c : lookup.getCustomers()) { jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n"); } } catch (Exception ex) { Exceptions.printStackTrace(ex); } } Does the above make as much sense to others as it does to me, including the naming of the classes? Feedback would be appreciated! Then I'll integrate into the tutorial and do the same for the other sections, i.e., "Create", "Update", and "Delete". (By the way, of course, the tutorial ends up showing that, rather than using a JTextArea to display data, you can use Nodes and explorer views to do so.)
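    (If it helps to see where the "to come" placeholders are heading, here is a minimal sketch of how an "Update" capability could be added to the same InstanceContent, following exactly the same pattern as the ReadCapability above. The UpdateCapability interface and the updateCustomer() method on CustomerDataAccess are illustrative assumptions, not something already in the tutorial:)

    public interface UpdateCapability {
        public void update(Customer customer) throws Exception;
    }

    // ...and, inside the CustomerLookupProvider constructor, in place of the
    // "Update" placeholder comment:
    instanceContent.add(new UpdateCapability() {
        @Override
        public void update(Customer customer) throws Exception {
            ProgressHandle handle = ProgressHandleFactory.createHandle("Saving...");
            handle.start();
            // updateCustomer() is a hypothetical method on CustomerDataAccess,
            // e.g. wrapping EntityManager.merge(customer) in a transaction.
            new CustomerDataAccess().updateCustomer(customer);
            handle.finish();
        }
    });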

    Read the article

  • Morgan Stanley chooses Solaris 11 to run cloud file services

    - by Frederic Pariente
    At the EAKC 2012 conference last week in Edinburgh, Robert Milkowski, Unix engineer at Morgan Stanley, presented on deploying OpenAFS on Solaris 11. It makes a great proof point of how ZFS and DTrace give a definite advantage to Solaris over Linux for running AFS distributed file system services, the "cloud file system" as he calls it in his blog. Robert used ZFS to achieve a 2-3x compression ratio on data and greatly lower the TCA and TCO of the storage subsystem, and DTrace to root-cause scalability bottlenecks and improve performance. As future ideas, he is looking at leveraging more Solaris features like Zones, ZFS Dedup, SSD for ZFS, etc.

    Read the article

  • New ADF Design Paper Covering Task Flows

    - by Duncan Mills
    Just published to OTN today is a new paper that I've put together: Task Flow Design Fundamentals. This paper collates a whole bunch of random thoughts about ADF Controller design that I've collected over the last couple of years. Hopefully this will be a useful aid to help you think about your task flow design in a more structured way.

    Read the article

  • Transient VO : Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different task flows. Essentially, they had a common set of parameters to be used. Initially, we resorted to the use of pageFlowScope variables, but they are tightly coupled with the individual task flows. So, how should the communication happen? Some of the alternatives that we brainstormed are: 1. Use an ADF contextual event - This is a powerful feature indeed for such use cases, but there is a considerable cost involved with it, so before resorting to it you have to make sure that you have a good enough reason to use it. It actually does a server roundtrip, and the issuing of an event and the listening part of it also require your attention. 2. Use a transient VO with shared data control scope - With shared data control scope, the transient VO rows persist across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating, updating, and deleting a row. You also have to make sure that the VO row is initialized per HTTP request (this you can do in a bookmark method of your index.jspx, residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render in the UI. Hope this helps; this should be a common use case across apps.
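    As a rough illustration of the utility methods mentioned above, here is a minimal sketch of a transient VOImpl with create/update/delete helpers. The class and attribute names are hypothetical; the oracle.jbo calls (createRow, insertRow, setAttribute, remove) are the standard API:

    import oracle.jbo.Row;
    import oracle.jbo.RowSetIterator;
    import oracle.jbo.server.ViewObjectImpl;

    // Hypothetical transient VO holding parameters shared across task flows
    // (the data control scope must be set to "shared" for the rows to survive
    // navigation between task flows).
    public class SharedParamsVOImpl extends ViewObjectImpl {

        // Create and insert a new parameter row.
        public Row createParamRow(String name, Object value) {
            Row row = createRow();
            row.setAttribute("ParamName", name);   // hypothetical attribute
            row.setAttribute("ParamValue", value); // hypothetical attribute
            insertRow(row);
            return row;
        }

        // Update the value of an existing parameter row, if present.
        public void updateParamRow(String name, Object value) {
            Row row = findParamRow(name);
            if (row != null) {
                row.setAttribute("ParamValue", value);
            }
        }

        // Remove a parameter row, if present.
        public void deleteParamRow(String name) {
            Row row = findParamRow(name);
            if (row != null) {
                row.remove();
            }
        }

        private Row findParamRow(String name) {
            RowSetIterator it = createRowSetIterator(null);
            try {
                while (it.hasNext()) {
                    Row row = it.next();
                    if (name.equals(row.getAttribute("ParamName"))) {
                        return row;
                    }
                }
            } finally {
                it.closeRowSetIterator();
            }
            return null;
        }
    }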

    Read the article

  • SMART Technologies E-Business Suite Release 12 Success

    Get smart about E-Business Suite Release 12 implementation with this customer success story. Hear Mike Battistel, VP Information Systems – SMART Technologies, discuss why they selected E-Business Suite Release 12, the implementation process, what benefits they have gained and lessons learned along the way. SMART Technologies is both the industry pioneer and market-segment leader in easy-to-use interactive whiteboards and other group collaboration tools.

    Read the article

  • Games Localization: Cultural Points

    - by ultan o'broin
    Great article about localization considerations, this time in the games space. Well worth checking out. It's rare to see such all-encompassing articles about localization considerations aimed at designers. That's a shame. The industry assumes all these things are known. The evidence from practice is that they're not and also need constant reinforcement. We're not in the games space in enterprise applications yet. However, there may be a role for games in the training space, but also in CRM, for building relationships and contacts. Beyond the obvious considerations, check out the cultural aspects of games localization too. For example, Zynga's offerings, which you might have played on Facebook: Zynga, which can lay claim to the two most popular social games on Facebook - FarmVille and CityVille - has recently localized both games for international audiences, and while CityVille has seen only localization for European languages, FarmVille has been localized for China, which involved rebuilding the game from the ground up. This localization process involved taking into account cultural considerations including changing the color palette to be brighter and increasing the size of the farm plots, to appeal to Chinese aesthetics and cultural experience. All the more reason to conduct research in your target markets, worldwide.

    Read the article

  • Financial Statement Presentation Changes

    - by Theresa Hickman
    On March 10, 2010, FASB and IASB came to an agreement on financial statement presentation. They have been discussing changes for some time, such as displaying more line items and moving certain line items from equity to the P&L, and now it seems they have finally come to a joint decision and put it in writing. I recently learned that there will be a trend to book nothing to equity and to convey everything in the P&L to take away any facility for companies to hide losses from their shareholders, investors, etc. (I'm exaggerating when I say book nothing to equity. Obviously, those items that already live there, such as stocks and dividends, would stay there.) But accounts like your CTA (Cumulative Translation Adjustment account) used to plug the gain or loss from equity translation would move from the equity section to your expense section. The rationale is that when you run translation, you're doing so for a subsidiary that you own, or simply put, it's a foreign investment. Thus, the gains/losses of that foreign investment should be itemized on your P&L and not buried in equity. The FASB will include changes in financial statement presentation in its Exposure Draft that is planned for issuance at the end of April 2010. Companies will be required to adopt the financial statement presentation provisions retroactively. Yes, that means companies will need to apply these changes to previously issued financial statements. The FASB Summary of Board Decisions can be found at "fasb.org".

    Read the article

  • My integer overfloweth

    - by darcy
    While certain classes like java.lang.Integer and java.lang.Math have been in the platform since the beginning, that doesn't mean there aren't more enhancements to be made in such places! For example, earlier in JDK 8, library support was added for unsigned integer arithmetic. More recently, my colleague Roger Riggs pushed a changeset to support integer overflow detection, that is, to provide methods which throw an ArithmeticException on overflow instead of returning a wrapped result. Besides being helpful for various programming tasks in Java, methods like those for integer overflow can be used to implement runtimes supporting other languages, as has been requested at a past JVM language summit. This year's language summit is coming up in July and I hope to get some additional suggestions there for helpful library additions as part of the general discussions of the JVM and Java libraries as a platform.
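    For the curious, here is a small sketch of the difference in behavior; in current JDK 8 builds the exact-arithmetic methods live in java.lang.Math (addExact, subtractExact, multiplyExact, toIntExact, and friends):

    public class OverflowDemo {
        public static void main(String[] args) {
            int big = Integer.MAX_VALUE;

            // Ordinary int arithmetic silently wraps around...
            System.out.println(big + 1);              // prints -2147483648

            // ...whereas the exact methods throw on overflow.
            try {
                int result = Math.addExact(big, 1);   // throws ArithmeticException
                System.out.println(result);
            } catch (ArithmeticException e) {
                System.out.println("integer overflow detected: " + e.getMessage());
            }

            // Narrowing a long to an int can be checked the same way.
            long tooBig = 1L << 40;
            try {
                int narrowed = Math.toIntExact(tooBig); // throws ArithmeticException
                System.out.println(narrowed);
            } catch (ArithmeticException e) {
                System.out.println("value does not fit in an int");
            }
        }
    }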

    Read the article

  • New JavaScript Editor

    - by Petr
    I haven't written a blog post here for a few weeks. I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written basically from scratch. The answer to the question "Why start from the beginning again, why not just improve the old one?" is not easy and the decision has several aspects. One of the main reasons is that the old support was written 4 years ago and its architecture is limited. Also, over time the APIs changed and it was very hard to keep the editor up to date. There is also a license issue, etc. In short, it is time to rewrite the old JS editor. We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support. You can download the result of the builds here. It's a zip file. You can unzip the file anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. That shouldn't happen, but just to be sure :). You can achieve this through the --userdir switch. Starting the unzipped build from the command line, from the folder where you unzipped it, can be done with this command on Unix: bin/netbeans.sh --userdir /path/to/new/userdir and on Windows: bin\netbeans.exe --userdir D:\path\to\new\userdir For the developers who already use the continual PHP build, this is well known. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. They will be, when we merge the branch to the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from trunk. This also answers the question of which release of NetBeans will contain the new JS support. It should be the release after NetBeans 7.2. I'm asking whether you could play with the builds or, better, work in the builds with the new JavaScript support and tell us about every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input and comments are very important for us and will help us to achieve the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them there. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue opens, with the Editor component and the NO72 keyword pre-filled. I will write about the individual features later, but for now I will mention a few features that should work better than in the old support: syntactic and semantic colouring, the Navigator, Mark Occurrences and Go To Declaration, and Code Completion. Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items that are found through a source model. Almost all editor features are based on the model, which is built from the source code. There is still a lot of work to do on the model, but it should offer better results.
    When the popup window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project - in the picture, all elements that start with the letter 't'. There is also a formatter with many options, and more :). A few features that are supported in the old JavaScript support are not implemented yet (for example jQuery support), but we are adding these features ASAP.

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube’s data source view with the data types/sizes of the corresponding dimensional attribute.  Why is this important?  Well, when creating named queries in a cube’s data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS.  This is particularly important when your cube is based on an Oracle data source or using custom SQL queries rather than views in the relational database.   The problem with BIDS is that if you change the underlying SQL query, then the size of the data type in the dimension does not update automatically.  This then causes problems during deployment whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute. In particular, if you use some string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10 character string you expect suddenly turns into an 8,000 character monster.  For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters.  So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters.  Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field. Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size. Introducing the CheckCubeDataTypes Utility The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube’s data source view.  It then generates an Excel CSV file which contains all this metadata along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute.  Note that the app not only checks all the attribute keys but also the name and value columns for each attribute. Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control.  This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development. You can download the C# source code from here: CheckCubeDataTypes.zip A typical example of the output Excel CSV file is shown below - note that the last column indicates a data size mismatch by TRUE appearing in that column.

    Read the article

  • JMSContext, @JMSDestinationDefinition, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
Start GlassFish: bin/asadmin start-domain Build the WAR (in the unzipped source code directory): mvn package Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war And access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using classic and simplified API. A replay of JMS 2.0 session from Java EE 7 Launch Webinar provides complete details on what's new in this specification: Enjoy!
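    To complete the picture, receiving a message is just as compact with the simplified API. Here is a rough sketch (the bean name is an assumption; the queue matches the sample above): JMSContext.createConsumer() returns a JMSConsumer, whose receiveBody() hands back the payload directly.

    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    @Stateless
    public class SimplifiedMessageReceiver {

        @Inject
        JMSContext context;

        @Resource(mappedName = "java:global/jms/myQueue")
        Queue myQueue;

        // Synchronously receive the next message and return its body as a String.
        // No Connection/Session/MessageConsumer plumbing, no checked JMSException,
        // and the consumer is closed automatically (JMSConsumer is AutoCloseable).
        public String receiveMessage() {
            try (JMSConsumer consumer = context.createConsumer(myQueue)) {
                return consumer.receiveBody(String.class, 1000); // wait up to one second
            }
        }
    }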

    Read the article

  • The Social Business Thought Leaders - Esteban Kolsky

    - by kellsey.ruppel
    Esteban Kolsky's presentation at the Social Business Forum 2012 was meaningfully titled “Everything you wanted to know about Customer Service using Social but had no one to ask”.  A recent survey by ThinkJar, Kolsky’s independent analyst firm, reported how more than 90% of the interviewed companies consider embracing social channels in customer service the right thing to do for the business and its customers. These numbers shouldn't be too surprising given the popularity of services such as Twitter and Facebook (59% and 60% respectively in the survey) among organizations, the power consumers are gaining online and the 40% preference they have to escalate issues on social services. Moreover, both large enterprises and small businesses are realizing how customer retention is cheaper and easier than customer acquisition. Many companies are looking at communities and social networks as an opportunity to drive loyalty, satisfaction and word of mouth. However, in this early phase the way they are preparing to launch social support appears to be lacking at best: 66% have no defined processes for customer service over social channels; 68% were not able to estimate ROI before deploying social in customer service; only 8% found the expected ROI; and most of the projects are stuck in the pilot or testing phase. In his interview for the Social Business Thought-Leaders, Esteban discusses how to turn social media hype into business gains by touching upon some of the hottest topics organizations face when approaching social support: how to go from social media monitoring to actionable insights; how Social CRM should best be positioned in regard to traditional CRM; and the importance of integrating social data with transactional data. Conversations with customer service organizations point to 2012 as the year of "understanding what social means for supporting customers". Will 2013 be the year it all becomes reality? We invite you to listen to Esteban Kolsky's interview to understand how to most effectively develop cross-channel strategies that include social channels and improve both customer satisfaction and the overall customer experience.

    Read the article

  • Developer Preview available: the Java Access Bridge is now included in Java SE 7 Update 6

    - by Ragini Prasad
    The Java Access Bridge product is now being included with Java SE 7u6. Manual installation of the Java Access Bridge will no longer be required. All Access Bridge files will be automatically installed by the JRE and the JDK. The Developer Preview for this feature is now available and can be downloaded from http://jdk7.java.net/download.html. By default, the Java Access Bridge is disabled. In order to use the Java Access Bridge, enable it using the steps mentioned below and test your applications for accessibility. Enable the Java Access Bridge, using one of these mechanisms: 1. The Ease of Access control panel. On Windows Vista and later the Java Access Bridge can be enabled from the Ease of Access Center. Select "Use the computer without a display". In the "Other programs installed" section, select the "Enable Java Access Bridge" check box and apply. 2. Or run the following command in the Command Window: %JRE_HOME%\bin\jabswitch -enable Note: You must restart your Assistive Technology software and Java application after enabling the bridge. Test the Java Access Bridge: 1. Enable the Java Access Bridge as described above. 2. Run an Assistive Technology that supports the Java Access Bridge. 3. Run a Java application. Ensure that the Assistive Technology reads the values of your application. Disable the Java Access Bridge: Run the following command from the Command Window: %JRE_HOME%\bin\jabswitch -disable Note: The Ease of Access control panel cannot be used to disable the bridge. You must use jabswitch from the Command Window to disable the Java Access Bridge.
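    If you need a quick Java application to test with, a minimal Swing sketch like the following will do (the component names and labels are arbitrary examples); the interesting part is setting accessible names and descriptions through the AccessibleContext so that the Assistive Technology has something meaningful to read:

    import java.awt.FlowLayout;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JTextField;
    import javax.swing.SwingUtilities;

    public class AccessBridgeTest {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                @Override
                public void run() {
                    JFrame frame = new JFrame("Access Bridge Test");
                    frame.setLayout(new FlowLayout());

                    JLabel label = new JLabel("Name:");
                    JTextField field = new JTextField(15);
                    // Give the field an accessible name/description that a screen
                    // reader connected through the Java Access Bridge can announce.
                    field.getAccessibleContext().setAccessibleName("Name input field");
                    field.getAccessibleContext().setAccessibleDescription("Type your name here");
                    label.setLabelFor(field);

                    JButton button = new JButton("OK");
                    button.getAccessibleContext().setAccessibleDescription("Confirms the entered name");

                    frame.add(label);
                    frame.add(field);
                    frame.add(button);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);
                }
            });
        }
    }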

    Read the article

  • Java Embedded @ JavaOne

    - by sasa
    JavaOne 2012 takes place September 30 - October 4 in San Francisco, and this year a new two-day event, Java Embedded @ JavaOne, will be held alongside it on October 3-4. Java Embedded @ JavaOne focuses on Java in the embedded space, including Java SE Embedded. Registration for Java Embedded @ JavaOne is $595 through September 7, $795 through September 28, and $995 thereafter; JavaOne attendees can add the event to their registration for $100.

    Read the article

  • How to get a Sun Ray to load a firmware from elsewhere

    - by vdiozguy
    I run a Sun Ray/VDI demo environment internally within the company - and because it's not a public service, I need to tell my Sun Rays to connect to it directly so that I don't get redirected to the corporate servers. To get any new Sun Ray to connect to *my* setup I usually pull out my laptop so that the Sun Ray can load the new version of the F/W along with the permission to pull up the management GUI via STOP-S. But there is a better way if you have another Sun Ray server handy:
    1) Allow your Sun Ray to connect to the default corporate server.
    2) Log in to a "regular" session, that is, a Solaris or Linux desktop on the Sun Ray server itself.
    3) In a terminal, utswitch to your server (/opt/SUNWut/bin/utswitch -h myserver).
    4) Again, log in to a regular session there.
    5) In a terminal, issue "/opt/SUNWut/lib/utload -S myserver -w".
    6) Watch your firmware load and wait.
    7) The Sun Ray will reboot and connect to the first server again. Repeat steps 2-4.
    8) Issue "/opt/SUNWut/lib/utload -S myserver -f SunRay.enableGUI".
    9) Press STOP-S and be merry.
    NOTE: I'm sure there is even yet a better way - this is totally unsupported, most likely a figment of my imagination. In any case, this post will self-destruct in BOOM.

    Read the article

  • PHP 5.4 Support: Traits

    - by Ondrej Brejla
    Hi all! Today we would like to introduce to you another new PHP 5.4 feature for NetBeans 7.2, which is called Traits. Note: All PHP 5.4 features are available in your projects after setting Project Properties -> Sources -> PHP Version to the PHP 5.4 value, or after choosing the same value during PHP Project creation (in the New Project Wizard). If you don't know what Traits are, just look at the official documentation, or the RFC. So what does the Trait support in NetBeans consist of? Syntax is recognized correctly and code completion offers declared and inherited items from used traits. Note: Just one thing is not supported yet - resolving name conflicts and aliasing of method names (which means that you will not have these "virtual" names in your code completion). We would like to implement it in a future NetBeans release. Sorry for any inconvenience. That's all for today and, as usual, please test it and if you find something strange, don't hesitate to file a new issue (component php, subcomponent Editor). Thanks.

    Read the article

  • Discovering Assets on Multiple Subnets

    - by Owen Allen
    I saw this question recently, about making discovery work: "I am attempting to discover a group of ILOMs which seem to be on a different subnet than their hosts. When I try to discover the ILOMs, Ops Center can't find them. Is it a requirement that the ILOM and its host should be on the same subnet?" The short answer is no, related assets don't have to be on the same subnet for you to discover them. The correlation between related assets doesn't depend on them being on the same subnet. What's likely going on, if you're running into similar issues, is that the subnet where you're trying to discover systems needs to be associated with a Proxy Controller that can reach it. You can go here to see how to associate a network with a Proxy Controller. Once the network is associated with a Proxy Controller, you should be able to discover and manage assets on it.

    Read the article

  • GP11.1

    - by user13334066
    It's the Assen round of the 2011 MotoGP season, and Ducati have launched their GP11.1. The Ducati's front end woes were quite efficiently highlighted throughout the 2010 season, with both Casey and Nicky regularly visiting the gravel traps. Now the question is: was it really a front end issue? What's most probable is: the GP10 never had a front end issue. It was the rear that was out. So what did Stoner's team do? They came up with setup changes that sorted out the rear end, while transferring the problem to the front. And Casey has this brilliant ability to push beyond the limits of a vague and erratic front end...and naturally the real problem lay hidden. Like Kevin Cameron said: in human nature, our strengths are our weaknesses. Casey's pure speed came at the cost of fine machinery feel, which ultimately took the Ducati in a wrong development direction.

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area.  Augmented humanity is about helping humans overcome their shortcomings using technology.  Putting a powerful smartphone in your pocket helps you in many ways like navigating streets, communicating with far off friends, and accessing information.  But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use. When Apple helped us drop the stylus, we took a giant leap forward in simplicity.  Using touchscreens with intuitive gestures was part of the iPhone's original appeal.  People don't want to know that technology is there -- they just want the benefits.  So what's the next leap beyond the touchscreen to make smartphones even easier to use? Two natural ways we interact with the world around us are by using sight and voice.  Google and Apple have been using both in their mobile platforms for limited use cases.  Nobody actually wants to type a text message, so why not just speak it?  And if you want more information about a book, why not just snap a picture of the cover?  That's much more accurate than trying to key the title and/or author. So what's Amazon been doing?  First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context.  Yes, it's an augmented reality app that uses the phone's camera to view products, and overlays data about the products on the screen.  For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos as well.  Download the app and try it out but don't expect perfection.  It's good enough to demonstrate the concept, but it's far from accurate enough.  (MobileBeat did a pretty good review.)  Extrapolate to the future and we might just have a heads-up display in our eyeglasses. The second interesting area is voice response, for which Siri is getting lots of attention.  Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed.  But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup. I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives.  This will, of course, impact the way we shop, making information more readily accessible than it already is.  Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • eFX on NetBeans Platform at Silicon Valley JavaFX User Group

    - by Geertjan
    Below you can watch (in addition to seeing Steve Chin and Ben Evans) Sven Reimers presenting eFX, a JavaFX application framework on the NetBeans Platform, yesterday at the Silicon Valley JavaFX User Group. While watching, you'll also learn quite a few things about the NetBeans Platform. In the end, you see a VisualVM clone written in JavaFX on the NetBeans Platform. Sven will also talk on this topic at NetBeans Day and during his sessions at JavaOne.

    Read the article

  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow-up on the previous post, here is some more on getting a serial plan and why that happens. Another reason to get a serial plan - besides Auto DOP not being on, as we looked at in the earlier post - and often a more prevalent one, is that the plan simply does not take long enough to be considered for a parallel path. The resulting plan and note look like this (note that this is a serial plan!):
    explain plan for select count(1) from sales;
    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 672559287
    --------------------------------------------------------------------------------------
    | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
    --------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |     |
    |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |     |
    |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |  16 |
    |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |  16 |
    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
    14 rows selected.
    The parallel threshold is referring to parallel_min_time_threshold and since I did not change the default (10s) the plan is not being considered for a parallel degree computation and is therefore staying with the serial execution. Now we go into the land of crazy: assume I do want this DOP=1 to happen. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:
    alter session set parallel_degree_limit = 1;
    The result I get is:
    ERROR:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among CPU IO AUTO INTEGER>=2
    Which of course makes perfect sense...

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.
    Parameter Settings to Look At
    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, do stick with either CPU or with a specific number. For now avoid the IO setting as I've seen some mixed results with that... In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a:
    SQL> select * from V$IO_CALIBRATION_STATUS;
    STATUS        CALIBRATION_TIME
    ------------- ----------------------------------------------------------------
    READY         04-JAN-11 10.04.13.104 AM
    You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan:
    Note
    -----
       - automatic DOP: skipped because of IO calibrate statistics are missing
    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:
    ERROR at line 1:
    ORA-56708: Could not find any datafiles with asynchronous i/o capability
    ORA-06512: at "SYS.DBMS_RMIN", line 463
    ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
    ORA-06512: at line 7
    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.
    Explain Plan Explanation
    To see the notes that are shown and explained here (and the above little snippet) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.
    explain plan for <statement>
    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    Auto DOP
    The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this: Automatic degree of parallelism is disabled: <reason> These are the reason codes:
    Parameter - parallel_degree_policy = manual, which will not allow Auto DOP to kick in
    Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    Outline - A SQL outline of an older version (before 11.2) is used
    SQL property restriction - The statement type does not allow for parallel processing
    Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO
    Recursive SQL statement - The statement type does not allow for parallel processing
    pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled
    Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.
    Parallel Degree Limited
    When Auto DOP does its work you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:
    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit
    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this:
    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1
    Dynamic Sampling
    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:
    Note
    -----
    - dynamic sampling used for this statement (level=5)

    Read the article
