Search Results

Search found 17227 results on 690 pages for 'oracle hcm cloud'.


  • OBIEE 11g recommended patch sets

    - by THE
     Martin has busied himself combining the recommended patch sets for OBIEE 11g into one single, useful KM note: OBIEE 11g: Required and Recommended Patches and Patch Sets (Doc ID 1488475.1). (This one contains the recommendations for 11.1.1.5 as well as those for 11.1.1.6.) So if you are looking for update/patch information for your OBIEE installation, this is most likely a useful stop. And as patching is an ongoing process, you may want to bookmark this KM doc, as I am sure Martin will keep it current as new patches come out. Oh - and if you are looking for upgrade information from 11.1.1.5 to 11.1.1.6, KM Doc ID 1434253.1 might just be the thing you are looking for.

    Read the article

  • Valentine's Day in sunny Ottawa, Ontario

    - by Roy F. Swonger
    Birds may fly south for the winter, but I will be heading to Ottawa, Ontario next week for a workshop on 15-FEB. I'm pretty used to winter because I grew up in the snow belt southeast of Buffalo, NY -- and this winter in New England is helping me get prepared as well. For reference, the big lump is my gas grill. Anyway, I'm looking forward to a good workshop and to visiting Ottawa for my first time. If you're in the Ottawa/Gatineau area and would like to attend, please contact your account team for more information.

    Read the article

  • Enterprise Service Bus (ESB): Important architectural piece to a SOA or is it just vendor hype?

    Is an Enterprise Service Bus (ESB) an important architectural piece of a Service-Oriented Architecture (SOA), or is it just vendor hype intended to sell a particular product such as SOA-in-a-box? According to IBM.com, an ESB is a flexible connectivity infrastructure for integrating applications and services; it offers a flexible and manageable approach to service-oriented architecture implementation. With that said, it is my personal belief that ESBs are an important architectural piece of any SOA. Additionally, generic design patterns have been created around the integration of web services into an ESB, regardless of vendor. ESB design patterns, according to Philip Hartman, can be classified into the following categories:
    - Interaction Patterns: enable service interaction points to send and/or receive messages from the bus
    - Mediation Patterns: enable the altering of message exchanges
    - Deployment Patterns: support solution deployment into a federated infrastructure
    Examples of Interaction Patterns: One-Way Message; Synchronous Interaction; Asynchronous Interaction; Asynchronous Interaction with Timeout; Asynchronous Interaction with a Notification Timer; One Request, Multiple Responses; One Request, One of Two Possible Responses; One Request, a Mandatory Response, and an Optional Response; Partial Processing; Multiple Application Interactions.
    Benefits of the Mediation Pattern:
    - The mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently
    - Design an intermediary to decouple many peers
    - Promote the many-to-many relationships between interacting peers to "full object status"
    Examples of Deployment Patterns:
    - Global ESB: services share a single namespace and all service providers are visible to every service requester across an entire network
    - Directly Connected ESB: a global service registry enables independent ESB installations to be visible to one another
    - Brokered ESB: bridges services that are reluctant to expose requesters or providers to ESBs in other domains
    - Federated ESB: service consumers and providers connect to the master or to a dependent ESB to access services throughout the network
    References:
    - Mediator Design Pattern. (2011). Retrieved 2011 from SourceMaking.com: http://sourcemaking.com/design_patterns/mediator
    - Hartman, P. (2006, January 24). ESB Patterns that "Click". Retrieved 2011 from The Art and Science of Being an IT Architect: http://artsciita.blogspot.com/2006/01/esb-patterns-that-click.html
    - IBM. (2011). WebSphere DataPower XC10 Appliance Version 2.0. Retrieved 2011 from IBM.com: http://publib.boulder.ibm.com/infocenter/wdpxc/v2r0/index.jsp?topic=%2Fcom.ibm.websphere.help.glossary.doc%2Ftopics%2Fglossary.html
    - Oracle. (2005). 12 Interaction Patterns. Retrieved 2011 from Oracle® BPEL Process Manager Developer's Guide: http://docs.oracle.com/cd/B31017_01/integrate.1013/b28981/interact.htm#BABHHEHD
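    To make the mediation idea concrete, below is a minimal, vendor-neutral sketch in plain Java of the classic Mediator pattern referenced above. It is only an illustration, not code from any ESB product, and the names (ServiceBus, SimpleServiceBus, ServiceEndpoint) are invented for this sketch. Endpoints never reference each other directly; they only talk to the bus, which is the single place where routing or transformation logic could live.

        import java.util.ArrayList;
        import java.util.List;

        // Mediator role: peers send to the bus, never to each other.
        interface ServiceBus {
            void register(ServiceEndpoint endpoint);
            void publish(String message, ServiceEndpoint sender);
        }

        class SimpleServiceBus implements ServiceBus {
            private final List<ServiceEndpoint> endpoints = new ArrayList<ServiceEndpoint>();

            @Override
            public void register(ServiceEndpoint endpoint) {
                endpoints.add(endpoint);
            }

            @Override
            public void publish(String message, ServiceEndpoint sender) {
                // Mediation point: transformation, filtering, or routing rules would go here.
                for (ServiceEndpoint endpoint : endpoints) {
                    if (endpoint != sender) {
                        endpoint.receive(message);
                    }
                }
            }
        }

        // Colleague role: knows only the bus, not the other endpoints.
        class ServiceEndpoint {
            private final String name;
            private final ServiceBus bus;

            ServiceEndpoint(String name, ServiceBus bus) {
                this.name = name;
                this.bus = bus;
                bus.register(this);
            }

            void send(String message) {
                bus.publish(message, this);
            }

            void receive(String message) {
                System.out.println(name + " received: " + message);
            }
        }

        public class MediatorDemo {
            public static void main(String[] args) {
                ServiceBus bus = new SimpleServiceBus();
                new ServiceEndpoint("BillingService", bus);
                ServiceEndpoint orders = new ServiceEndpoint("OrderService", bus);
                orders.send("OrderCreated#1001");
            }
        }

    A real ESB layers reliability, protocol bridging, and message transformation on top of this routing core, but the decoupling benefit it provides is the same.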

    Read the article

  • New Java EE/GlassFish Testimonial

    - by reza_rahman
    As you may be aware, we have been making a concerted effort to ask successful Java EE/GlassFish adopters to come forward with their stories. A number of such stories were shared at this year's GlassFish Community event at JavaOne. In addition to Adam Bien's testimonial (which we posted earlier), another story that really stands out is the one from Stephan Janssen. Stephan is one of the main organizers of Devoxx and the webmaster of the popular Parleys e-learning platform. Parleys, which won the Duke's Choice award this year, runs on GlassFish, as does the Devoxx CFP/registration website. Stephan's story is particularly interesting because he talks about his reasons for, and experience of, moving from Tomcat to GlassFish and from Spring to Java EE. See what Stephan had to say here.

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can grow to multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....
    All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets by a factor of 1/3600 !!! Very cool. I can now allow my datasets to go forever, and really never have to worry about them filling up my OS drives.
    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32MB on disk. (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time. Just relax.) When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.
    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what size of granularity you can live with, and for how long. Check this out. Here is my Disk: Percent utilized from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look.
    Steve

    Read the article

  • Using the full tty real estate with sqlplus

    - by katsumii
    I believe 'rlwrap' is widely used to add history and command-line editing to 'sqlplus'. 'rlwrap' has other functions as well, and here's my kludgy alias to force sqlplus to use the full real estate of your tty, be it a PuTTY session from Windows or a Linux GNOME terminal.

        $ declare -f sqlplus
        sqlplus ()
        {
            PRE_TEXT=$(resize | sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p" | while read line; do printf "%s \ " "$line"; done);
            if [ -z "$1" ]; then
                rlwrap -m -P "$PRE_TEXT" sqlplus /nolog;
            else
                if ! echo $1 | grep '^-' > /dev/null; then
                    rlwrap -m -P "$PRE_TEXT connect $*" sqlplus /nolog;
                else
                    command sqlplus $*;
                fi;
            fi
        }

    Read the article

  • 7-minute Community GlassFish Clustering Screencast

    - by alexismp
    Community member Faissal has recently put up a 7-minute screencast of an unedited setup of a GlassFish cluster. His clustering setup spans a Mac and a virtualized Ubuntu host. It could probably be simplified further by using the SSH provisioning feature (asadmin install-node) to avoid logging into remote machines. You may remember John's GlassFish clustering in under 10 minutes, a very successful video which was based on version 2. Since then we've put out a number of demos on YouTube for our 3.1.x versions, including a full webinar replay.

    Read the article

  • SQL installation scripts for WebCenter Content 11g

    - by KJH
    As part of the installation of WebCenter Content 11g (UCM or URM), one of the main steps is to run the Repository Creation Utility (RCU) to establish the database schema and tables. This is pretty helpful because it runs all the scripts you need without your having to set anything up manually in the database. In UCM 10g and earlier, the installation itself would establish the database tables if you wanted it to. Otherwise, the SQL scripts were available to be run independently ahead of time. For DBAs who wanted to understand what was being done to the database for the application, this was helpful. But in 11g, that is all masked by RCU. You don't get to see the scripts at all as it establishes the tables. But if you comb through the directories for RCU, you can track them down. They are in the /rcuHome/rcu/integration/contentserver11/sql/ directories. And to understand the order in which they are run, you can open up the /rcuHome/rcu/integration/contentserver11/contentserver11.xml file and see how they are run there. The order in which they are run is:
        contentserverrole.sql
        contentserveruser.sql
        intradoc.sql
        workflow.sql
        formats.sql
        users.sql
        default.sql
        contentprocedures.sql
    If you are installing WebCenter Records (URM), it will run some additional scripts between formats.sql and users.sql:
        MetadataSet.sql
        UIEnhancements.sql
        RecordsManagement.sql
        RecordsManagement_default.sql
        ClassifiedEnhancements.sql
        ClassifiedEnhancements_default.sql
    In addition to the scripts being available within the RCU install directories, they are also available from within the Content Server UI. If you go to Administration -> DataStoreDesign SQL Generation, that page allows you to download these various SQL scripts. From here, you can select your particular database type and which components to include. Several components make changes dynamically to the database when they are enabled, so these scripts give you a way to inspect what is being run during that startup time. Once selected, click Generate and you can then either view or download the scripts from the Actions menu.
    DISCLAIMER: Installations are ONLY supported when done with the Repository Creation Utility. These scripts are for reference only and not supported to be run manually.

    Read the article

  • Wrapping up an Exciting Mobile World Congress

    - by Jacob Lehrbaum
    It's been a busy week here in Barcelona, with noticeably more energy at the show than in 2010. This year, we decided to move the Java booth to the App Planet and really engage with the increasing number of developers attending the event. Our booth featured 10 demos and a series of nearly 25 workshops on a variety of topics, ranging from information about Java Verified, to the use of web technologies with Java ME, to sessions hosted by operators such as Orange and Telefonica (see image to the left). One of the more popular topics in our booth was the use of Java in the Smart Grid. We were showing off some of the work of the Hydra Consortium, whose goal is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data (weight, blood pressure, etc.) from the home to your doctor. If you'd like to learn more about this innovative project, you can watch a video that was filmed at the event featuring Charles Palmer of Onzo. If you'd like to learn more about Java in the Smart Grid, check out our on-demand webinar.

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his Best Practices for Going Mobile. Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing. Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but approaching mobile strategically can enable greater returns on a similar investment and on subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents.
    Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person?
    Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users.
    Mobile Is Broad. After establishing the audience and goal, review the technology requirements to support them. Do you need a mobile Website, a native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and the mobile Web.
    Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy.
    Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences.
    Read more from John Brunswick. We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels in a webcast tomorrow. We hope you'll join us!

    Read the article

  • Upcoming Enhancements in AngularJS Integration in NetBeans IDE

    - by Geertjan
    New bleeding edge enhancements in AngularJS support in NetBeans IDE enable many more controllers to be found than in NetBeans IDE 7.4. The next version of NetBeans IDE parses all JavaScript files and checks for defined AngularJS controllers, such as the below: All recognized AngularJS controllers are offered in code completion, as shown below. In other words, code completion works better in finding AngularJS controllers. Another improvement is in the "Go To Declaration" feature. When you click Ctrl+Mouse over the name of a controller inside an NG-controller directive, you will be navigated to the related controller declaration. More accurate results can be shown in code completion mainly because there are changes in the generation of JavaScript virtual sources in an AngularJS page.

    Read the article

  • What Extends JComponent?

    - by Geertjan
    Let NetBeans IDE tell you what extends JComponent: Thanks to Damian from Gdansk in Poland who left a comment in this blog yesterday that explains how to achieve the above: "Let's say you would like to find all implementations of TreeModel interface. I found today an answer: r-click on class/interface -> chose Navigate->Inspect Hierarchy->select second filter at the bottom of opened window or press Alt+Shift+F12 and then Alt+B." In the list above, double-click an item to open it in the NetBeans Java editor. Handy tip, dziekuje bardzo, Damian.

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    Great article, although it is a little dated at this point: the InformationWeek article Standards Matter: The Battle for Interoperability Goes On.
    Summary: If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better. The alternative: splintered networks and broken promises.
    The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important."
    In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together and not be protecting an existing product. Customers must also be willing to truly work together, not demanding a solution that meets only their own needs but instead one that meets the needs of all participants. Ultimately, customers must be willing to reward vendor compliance by requiring compliance in the products and services that they purchase and deploy. Managers that deploy systems without compliance to standards are only hurting themselves.
    Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article

  • Reminder: GlassFish 3.1 Clustering Webinar Today!

    - by alexismp
    Quick reminder for those of you that missed the GlassFish Clustering Webinar from March: we have a new session today (June 28th, 2011). The session is planned for 10:00 a.m. PT / 1:00 p.m. ET / 19:00 CET, and you'll need to register first. John Clingan, Principal Product Manager for GlassFish, will walk you through the various clustering features introduced and enhanced in version 3.1. This includes the SSH-based provisioning of clusters (never log in to any machine again), centralized administration, high availability and smart failover, the load balancer, Domain Admin Server (DAS) performance improvements, cluster deployments, and more. Besides learning about these new product features, this is also your chance to ask questions of John and other GlassFish team members. See you there!

    Read the article

  • Meet up with the JCP at JavaOne Latin America

    - by Heather VanCura
    The JCP made it to JavaOne Brazil!  We had a quickie presentation earlier today on JCP.Next that was well attended.  Come see us at the OTN mini-theatre tomorrow from 12:00-12:15 pm for a quickie on participation.  Then make your way to Mezanino Sala 12 at 12:30 pm for CON-22250.  "The Java Community Process: How You Can Make a Positive Difference" will be presented by Heather VanCura, JCP, and Fabio Velloso, SouJava, on Thursday, 6 December, at 12:30 pm.  Find out more about how to participate in the JCP program, the JCP.Next effort, and how to get involved with Adopt-a-JSR through your JUG (or on your own)!  Here is the session description, translated from the Portuguese: The JCP plays a fundamental role in the evolution of Java. The session will emphasize the value of transparency and participation through the JCP, Java User Groups, and the Adopt-a-JSR program. We will also explore some of the upcoming changes to the process through the JCP.Next initiative, and explain how you can get involved. Bring your questions, your suggestions, and your concerns. We want to hear from you, and to encourage and facilitate your active participation in advancing the Java platform.

    Read the article

  • How to Draw Lines on the Screen

    - by Geertjan
    I've seen occasional questions on mailing lists about how to use the NetBeans Visual Library to draw lines, e.g., to make graphs or diagrams of various kinds by drawing on the screen. So, rather than drag/drop causing widgets to be added, you'd want widgets to be added on mouse clicks, and you'd want to be able to connect those widgets together somehow. Via the code below, you'll be able to click on the screen, which causes a dot to appear. When you have multiple dots, you can hold down the Ctrl key and connect them together. A guiding line appears to help you position the dots exactly in line with each other. When you go to File | Print, you'll be able to preview and print the diagram you've created. A picture that speaks 1000 words:
    Here's the code:

        public final class PlotterTopComponent extends TopComponent {

            private final Scene scene;
            private final LayerWidget baseLayer;
            private final LayerWidget connectionLayer;
            private final LayerWidget interactionLayer;

            public PlotterTopComponent() {
                initComponents();
                setName(Bundle.CTL_PlotterTopComponent());
                setToolTipText(Bundle.HINT_PlotterTopComponent());
                setLayout(new BorderLayout());
                this.scene = new Scene();
                this.baseLayer = new LayerWidget(scene);
                this.interactionLayer = new LayerWidget(scene);
                this.connectionLayer = new LayerWidget(scene);
                scene.getActions().addAction(new SceneCreateAction());
                scene.addChild(baseLayer);
                scene.addChild(interactionLayer);
                scene.addChild(connectionLayer);
                add(scene.createView(), BorderLayout.CENTER);
                putClientProperty("print.printable", true);
            }

            private class SceneCreateAction extends WidgetAction.Adapter {
                @Override
                public WidgetAction.State mousePressed(Widget widget, WidgetAction.WidgetMouseEvent event) {
                    if (event.getClickCount() == 1) {
                        if (event.getButton() == MouseEvent.BUTTON1 || event.getButton() == MouseEvent.BUTTON2) {
                            baseLayer.addChild(new BlackDotWidget(scene, widget, event));
                            repaint();
                            return WidgetAction.State.CONSUMED;
                        }
                    }
                    return WidgetAction.State.REJECTED;
                }
            }

            private class BlackDotWidget extends ImageWidget {
                public BlackDotWidget(Scene scene, Widget widget, WidgetAction.WidgetMouseEvent event) {
                    super(scene);
                    setImage(ImageUtilities.loadImage("org/netbeans/plotter/blackdot.gif"));
                    setPreferredLocation(widget.convertLocalToScene(event.getPoint()));
                    getActions().addAction(
                            ActionFactory.createExtendedConnectAction(
                            connectionLayer, new BlackDotConnectProvider()));
                    getActions().addAction(
                            ActionFactory.createAlignWithMoveAction(
                            baseLayer, interactionLayer,
                            ActionFactory.createDefaultAlignWithMoveDecorator()));
                }
            }

            private class BlackDotConnectProvider implements ConnectProvider {

                @Override
                public boolean isSourceWidget(Widget source) {
                    return source instanceof BlackDotWidget && source != null ? true : false;
                }

                @Override
                public ConnectorState isTargetWidget(Widget src, Widget trg) {
                    return src != trg && trg instanceof BlackDotWidget ? ConnectorState.ACCEPT : ConnectorState.REJECT;
                }

                @Override
                public boolean hasCustomTargetWidgetResolver(Scene arg0) {
                    return false;
                }

                @Override
                public Widget resolveTargetWidget(Scene arg0, Point arg1) {
                    return null;
                }

                @Override
                public void createConnection(Widget source, Widget target) {
                    ConnectionWidget conn = new ConnectionWidget(scene);
                    conn.setTargetAnchor(AnchorFactory.createCircularAnchor(target, 10));
                    conn.setSourceAnchor(AnchorFactory.createCircularAnchor(source, 10));
                    connectionLayer.addChild(conn);
                }
            }

            ...
            ...
            ...
Note: The code above is based on the Visual Library tutorials on the NetBeans Platform Learning Trail, in particular the "ConnectScene" sample in the "test.connect" package, which is part of the long list of Visual Library samples referred to in those tutorials. The next steps are to add a reconnect action and an action to delete a dot by double-clicking on it. It would be interesting to change the connecting line so that its length is shown, i.e., as you draw a line from one dot to another, you'd see a constantly changing number representing the current distance of the connecting line. Also, once lines are connected to form a rectangle, it would be cool to be able to write something within that rectangle. Then one could really create diagrams, which would be pretty cool.
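    As a follow-up to the "next steps" mentioned above, here is a minimal, untested sketch of what the delete-on-double-click action could look like, using the same Visual Library API as the code above. The class name DeleteDotAction is made up for this sketch, and a complete version would also have to remove any ConnectionWidget anchored to the deleted dot from the connection layer. It would be registered in the BlackDotWidget constructor via getActions().addAction(new DeleteDotAction()).

        private class DeleteDotAction extends WidgetAction.Adapter {
            @Override
            public WidgetAction.State mouseClicked(Widget widget, WidgetAction.WidgetMouseEvent event) {
                // On a double-click with the primary button, remove the dot from its layer.
                if (event.getClickCount() == 2 && event.getButton() == MouseEvent.BUTTON1) {
                    widget.removeFromParent();
                    scene.validate();
                    return WidgetAction.State.CONSUMED;
                }
                return WidgetAction.State.REJECTED;
            }
        }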

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;
    A System for Building Queries
    When you're building queries, you could use a system like the following:
    1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
       - Values you want to see in your output.
       - Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
       - Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information.
       - Values you want to order by. For example, you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that.
    2. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables:
       - Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
       - Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example, you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of the columns at either end of such a connection.
       - Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect the tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables.
    3. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause.
    4. Filter using some of the fields in point 1 above. This becomes your WHERE clause.
    5. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows.
    6. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause.
    A Worked Example
    Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:
    The Country.Name and City.Name columns contain the name of the country and city respectively. The change in GNP comes from the calculation GNP - GNPOld; both those columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read.
    Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country. Also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:
        FROM Country JOIN City ON Country.Capital = City.ID
    The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:
        WHERE Country.Population > 100e6
    To sort the result set in reverse order of difference in GNP, you could use either the calculation, or the position in the output (it's the third column):
        ORDER BY GNP - GNPOld or ORDER BY 3
    Finally, project the columns you wish to see by constructing the SELECT clause:
        SELECT Country.Name AS Country, City.Name AS Capital,
               GNP - GNPOld AS `Difference in GNP`
    The whole statement ends up looking like this:
        mysql> SELECT Country.Name AS Country, City.Name AS Capital,
            -> GNP - GNPOld AS `Difference in GNP`
            -> FROM Country JOIN City ON Country.Capital = City.ID
            -> WHERE Country.Population > 100e6
            -> ORDER BY 3 DESC;
        +--------------------+------------+-------------------+
        | Country            | Capital    | Difference in GNP |
        +--------------------+------------+-------------------+
        | United States      | Washington |         399800.00 |
        | China              | Peking     |          64549.00 |
        | India              | New Delhi  |          16542.00 |
        | Nigeria            | Abuja      |           7084.00 |
        | Pakistan           | Islamabad  |           2740.00 |
        | Bangladesh         | Dhaka      |            886.00 |
        | Brazil             | Brasília   |         -27369.00 |
        | Indonesia          | Jakarta    |        -130020.00 |
        | Russian Federation | Moscow     |        -166381.00 |
        | Japan              | Tokyo      |        -405596.00 |
        +--------------------+------------+-------------------+
        10 rows in set (0.00 sec)
    Queries with Aggregates and GROUP BY
    While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average of GNP per head of population, calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.
As an example, consider the following, which shows a count of  countries and the average population per continent:  mysql> SELECT Continent, COUNT(Name), AVG(Population)     -> FROM Country     -> GROUP BY Continent; +---------------+-------------+-----------------+ | Continent     | COUNT(Name) | AVG(Population) | +---------------+-------------+-----------------+ | Asia          |          51 |   72647562.7451 | | Europe        |          46 |   15871186.9565 | | North America |          37 |   13053864.8649 | | Africa        |          58 |   13525431.0345 | | Oceania       |          28 |    1085755.3571 | | Antarctica    |           5 |          0.0000 | | South America |          14 |   24698571.4286 | +---------------+-------------+-----------------+ 7 rows in set (0.00 sec) In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and  more fields in the GROUP BY clause as you require. You would also normally alias columns to make the output more suited to your requirements. More Complex Queries  Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on.
    Skills/Equipment/Manpower We Possessed
    I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years – so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer.
    Why We Did This
    Our deck was relatively young – it was built in 2005. However, the pressure treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times. The boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance.
    More bad deck boards
    The deck boards were in bad shape
    Things We Learned
    The two most important things: The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first. When pre-drilling holes for the boards that need to be screwed down – DO NOT pre-drill through the underlying framing wood. ONLY pre-drill through the TREX itself. The screw won’t seat in the board properly. Instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap. If it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well.
    The Process
    How much time did it take? Well, I spent about 8 hours taking the deck apart. And then the 3 of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures…
    This 6”+ pry bar made the destruction of the old deck much easier
    Most of the joists, once exposed, were OK.
    This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in.
    Awesome…
    These monster lag bolts had to be accounted for when putting in the additional framing
    The border pattern Sheri wanted to put in required a lot more framing
    These were the first boards to go down – we screwed them in as there was no way to attach clips
    I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drill on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes.
    This was the end of construction day #2 – we got much further than we thought we would.
    Ah, the dreaded last 10% – what to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed with the proper clearance and all that jazz.
    The stairs with stringers and toe kicks – this was worth the effort
    It started raining on us as I screwed down the steps – but we managed to get our shade tent up on the deck to protect us from the rain too
    The stairs, finished
    Finished, mostly
    Good corner shot
    The top of the stairs
    Stairs, looking down
    Celebratory beer
    In Summary
    There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would have never done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family and building cool stuff.

    Read the article

  • Search in Projects API

    - by Geertjan
    Today I got some help from Jaroslav Havlin, the creator of the new "Search in Projects API". Below are the steps to create a search provider that finds recently modified files, via a new tab in the "Find in Projects" dialog: Here's how to get to the above result. Create a new NetBeans module project named "RecentlyModifiedFilesSearch". Then set dependencies on these libraries: Search in Projects API Lookup API Utilities API Dialogs API Datasystems API File System API Nodes API Create and register an implementation of "SearchProvider". This class tells the application the name of the provider and how it can be used. It should be registered via the @ServiceProvider annotation.Methods to implement: Method createPresenter creates a new object that is added to the "Find in Projects" dialog when it is opened. Method isReplaceSupported should return true if this provider support replacing, not only searching. If you want to disable the search provider (e.g., there aren't required external tools available in the OS), return false from isEnabled. Method getTitle returns a string that will be shown in the tab in the "Find in Projects" dialog. It can be localizable. Example file "org.netbeans.example.search.ExampleSearchProvider": package org.netbeans.example.search; import org.netbeans.spi.search.provider.SearchProvider; import org.netbeans.spi.search.provider.SearchProvider.Presenter; import org.openide.util.lookup.ServiceProvider; @ServiceProvider(service = SearchProvider.class) public class ExampleSearchProvider extends SearchProvider { @Override public Presenter createPresenter(boolean replaceMode) { return new ExampleSearchPresenter(this); } @Override public boolean isReplaceSupported() { return false; } @Override public boolean isEnabled() { return true; } @Override public String getTitle() { return "Recent Files Search"; } } Next, we need to create a SearchProvider.Presenter. This is an object that is passed to the "Find in Projects" dialog and contains a visual component to show in the dialog, together with some methods to interact with it.Methods to implement: Method getForm returns a JComponent that should contain controls for various search criteria. In the example below, we have controls for a file name pattern, search scope, and the age of files. Method isUsable is called by the dialog to check whether the Find button should be enabled or not. You can use NotificationLineSupport passed as its argument to set a display error, warning, or info message. Method composeSearch is used to apply the settings and prepare a search task. It returns a SearchComposition object, as shown below. Please note that the example uses ComponentUtils.adjustComboForFileName (and similar methods), that modifies a JComboBox component to act as a combo box for selection of file name pattern. These methods were designed to make working with components created in a GUI Builder comfortable. Remember to call fireChange whenever the value of any criteria changes. 
Example file "org.netbeans.example.search.ExampleSearchPresenter": package org.netbeans.example.search; import java.awt.FlowLayout; import javax.swing.BoxLayout; import javax.swing.JComboBox; import javax.swing.JComponent; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JSlider; import javax.swing.event.ChangeEvent; import javax.swing.event.ChangeListener; import org.netbeans.api.search.SearchScopeOptions; import org.netbeans.api.search.ui.ComponentUtils; import org.netbeans.api.search.ui.FileNameController; import org.netbeans.api.search.ui.ScopeController; import org.netbeans.api.search.ui.ScopeOptionsController; import org.netbeans.spi.search.provider.SearchComposition; import org.netbeans.spi.search.provider.SearchProvider; import org.openide.NotificationLineSupport; import org.openide.util.HelpCtx; public class ExampleSearchPresenter extends SearchProvider.Presenter { private JPanel panel = null; ScopeOptionsController scopeSettingsPanel; FileNameController fileNameComboBox; ScopeController scopeComboBox; ChangeListener changeListener; JSlider slider; public ExampleSearchPresenter(SearchProvider searchProvider) { super(searchProvider, false); } /** * Get UI component that can be added to the search dialog. */ @Override public synchronized JComponent getForm() { if (panel == null) { panel = new JPanel(); panel.setLayout(new BoxLayout(panel, BoxLayout.PAGE_AXIS)); JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING)); JPanel row2 = new JPanel(new FlowLayout(FlowLayout.LEADING)); JPanel row3 = new JPanel(new FlowLayout(FlowLayout.LEADING)); row1.add(new JLabel("Age in hours: ")); slider = new JSlider(1, 72); row1.add(slider); final JLabel hoursLabel = new JLabel(String.valueOf(slider.getValue())); row1.add(hoursLabel); row2.add(new JLabel("File name: ")); fileNameComboBox = ComponentUtils.adjustComboForFileName(new JComboBox()); row2.add(fileNameComboBox.getComponent()); scopeSettingsPanel = ComponentUtils.adjustPanelForOptions(new JPanel(), false, fileNameComboBox); row3.add(new JLabel("Scope: ")); scopeComboBox = ComponentUtils.adjustComboForScope(new JComboBox(), null); row3.add(scopeComboBox.getComponent()); panel.add(row1); panel.add(row3); panel.add(row2); panel.add(scopeSettingsPanel.getComponent()); initChangeListener(); slider.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { hoursLabel.setText(String.valueOf(slider.getValue())); } }); } return panel; } private void initChangeListener() { this.changeListener = new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { fireChange(); } }; fileNameComboBox.addChangeListener(changeListener); scopeSettingsPanel.addChangeListener(changeListener); slider.addChangeListener(changeListener); } @Override public HelpCtx getHelpCtx() { return null; // Some help should be provided, omitted for simplicity. } /** * Create search composition for criteria specified in the form. */ @Override public SearchComposition<?> composeSearch() { SearchScopeOptions sso = scopeSettingsPanel.getSearchScopeOptions(); return new ExampleSearchComposition(sso, scopeComboBox.getSearchInfo(), slider.getValue(), this); } /** * Here we return always true, but could return false e.g. if file name * pattern is empty. */ @Override public boolean isUsable(NotificationLineSupport notifySupport) { return true; } } The last part of our search provider is the implementation of SearchComposition. 
This is a composition of various search parameters, the actual search algorithm, and the displayer that presents the results.Methods to implement: The most important method here is start, which performs the actual search. In this case, SearchInfo and SearchScopeOptions objects are used for traversing. These objects were provided by controllers of GUI components (in the presenter). When something interesting is found, it should be displayed (with SearchResultsDisplayer.addMatchingObject). Method getSearchResultsDisplayer should return the displayer associated with this composition. The displayer can be created by subclassing SearchResultsDisplayer class or simply by using the SearchResultsDisplayer.createDefault. Then you only need a helper object that can create nodes for found objects. Example file "org.netbeans.example.search.ExampleSearchComposition": package org.netbeans.example.search; public class ExampleSearchComposition extends SearchComposition<DataObject> { SearchScopeOptions searchScopeOptions; SearchInfo searchInfo; int oldInHours; SearchResultsDisplayer<DataObject> resultsDisplayer; private final Presenter presenter; AtomicBoolean terminated = new AtomicBoolean(false); public ExampleSearchComposition(SearchScopeOptions searchScopeOptions, SearchInfo searchInfo, int oldInHours, Presenter presenter) { this.searchScopeOptions = searchScopeOptions; this.searchInfo = searchInfo; this.oldInHours = oldInHours; this.presenter = presenter; } @Override public void start(SearchListener listener) { for (FileObject fo : searchInfo.getFilesToSearch( searchScopeOptions, listener, terminated)) { if (ageInHours(fo) < oldInHours) { try { DataObject dob = DataObject.find(fo); getSearchResultsDisplayer().addMatchingObject(dob); } catch (DataObjectNotFoundException ex) { listener.fileContentMatchingError(fo.getPath(), ex); } } } } @Override public void terminate() { terminated.set(true); } @Override public boolean isTerminated() { return terminated.get(); } /** * Use default displayer to show search results. */ @Override public synchronized SearchResultsDisplayer<DataObject> getSearchResultsDisplayer() { if (resultsDisplayer == null) { resultsDisplayer = createResultsDisplayer(); } return resultsDisplayer; } private SearchResultsDisplayer<DataObject> createResultsDisplayer() { /** * Object to transform matching objects to nodes. */ SearchResultsDisplayer.NodeDisplayer<DataObject> nd = new SearchResultsDisplayer.NodeDisplayer<DataObject>() { @Override public org.openide.nodes.Node matchToNode( final DataObject match) { return new FilterNode(match.getNodeDelegate()) { @Override public String getDisplayName() { return super.getDisplayName() + " (" + ageInMinutes(match.getPrimaryFile()) + " minutes old)"; } }; } }; return SearchResultsDisplayer.createDefault(nd, this, presenter, "less than " + oldInHours + " hours old"); } private static long ageInMinutes(FileObject fo) { long fileDate = fo.lastModified().getTime(); long now = System.currentTimeMillis(); return (now - fileDate) / 60000; } private static long ageInHours(FileObject fo) { return ageInMinutes(fo) / 60; } } Run the module, select a node in the Projects window, press Ctrl-F, and you'll see the "Find in Projects" dialog has two tabs, the second is the one you provided above:

    Read the article

  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
                <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has a setItems and a getItems method defined that are accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable - the #{row} reference - is filled with the data object displayed in the currently rendered table row. The managed bean's getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        //this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
          FacesContext fctx = FacesContext.getCurrentInstance();
          ELContext elctx = fctx.getELContext();
          ExpressionFactory efactory =
                 fctx.getApplication().getExpressionFactory();
          ValueExpression ve =
                 efactory.createValueExpression(elctx, "#{row}", Object.class);
          Row rw = (Row) ve.getValue(elctx);
          //use one of the row attributes to determine which list to query and
          //show in the current af:selectOneChoice list
          // ...
          ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
          for( ... ){
              SelectItem item = new SelectItem();
              item.setLabel(...);
              item.setValue(...);
              alsi.add(item);
          }
          return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component - af:selectOneChoice in this example - is instantiated once for the column and then repeatedly used to display the cell data for individual table rows. This however means that you cannot refresh a single select one choice component in a table to change its list values. Instead the whole table needs to be refreshed, rerunning the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, with business services that have low-latency data fetching (e.g. ADF Business Components and EJB), and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).
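    To illustrate the session-scope caching suggestion at the end, the sketch below shows one possible shape of such a cache. It is not part of the original example; the class and method names (ListValueCache, loadItemsForKey) and the sample values are made up. The idea is to query the business service only once per distinct key value and to reuse the cached List<SelectItem> for all rows that share that value.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.faces.model.SelectItem;

        //Hypothetical session-scope managed bean that caches list values per key
        //so the business service is queried only once per distinct key.
        public class ListValueCache {

            private final Map<String, List<SelectItem>> cache =
                    new HashMap<String, List<SelectItem>>();

            public List<SelectItem> getItems(String rowKey) {
                List<SelectItem> items = cache.get(rowKey);
                if (items == null) {
                    items = loadItemsForKey(rowKey);
                    cache.put(rowKey, items);
                }
                return items;
            }

            //Placeholder: in a real application this would call ADF BC or an EJB
            //and map the query result to SelectItem instances.
            private List<SelectItem> loadItemsForKey(String rowKey) {
                List<SelectItem> result = new ArrayList<SelectItem>();
                result.add(new SelectItem(rowKey + "-1", "Sample label 1"));
                result.add(new SelectItem(rowKey + "-2", "Sample label 2"));
                return result;
            }
        }

    The getItems method shown in the article could then delegate to a bean like this, passing the relevant attribute of the current #{row} as the key, instead of building the list from a query on every row rendering.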

    Read the article

  • America The Vulnerable

    - by Naresh Persaud
    At the Executive Edge this week, Joel Brenner shared his perspective on the state of cyber-security. Today our most critical military and corporate secrets are under attack. In his presentation, Joel discussed how organizations can better prepare for the changing security climate. The amount of state-sponsored espionage has highlighted weaknesses in our national security infrastructure. The Internet was primarily intended to provide a means of collaboration for non-commercial entities. Today it is the backbone of our digital commerce and digital experience, and it was not designed to secure the activities and data we share over it. Check out "America The Vulnerable" and learn more.

    Read the article

  • Evaluating and Investigating Drug Safety Signals with Public Databases Webinar

    - by Roxana Babiciu
    In this one-hour webinar, BioPharm Systems' Dr. Rodney Lemery, vice president of safety and pharmacovigilance, will review a number of public databases available to use during the evaluation and investigation of identified safety signals. The discussion will focus on the use of free and paid longitudinal healthcare databases available online. After attending this presentation, you will better understand how these data sources can be used in your daily PV work. Read more here

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.
    1. Location In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.
    2. Computer Vision In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their Red Laser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.
    3. Augmented Reality Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.
    4. Geo-Fencing So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal.
    5. Digital Wallet Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.
    6. Voice Response Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.

    Read the article
