Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.


  • Experiencing the New Social Enterprise

    - by kellsey.ruppel
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an “Enterprise 2.0” environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Here are some key points and takeaways from some of the keynotes yesterday at the Enterprise 2.0 Conference:

    - Social networks continue to forge complex connections between people, processes, and content, facilitating collaboration and the sharing of information.
    - The customer of today lives inside of Facebook, on your website, or has an app for that – and they have a question – and want an answer NOW.
    - Empowered employees are able to connect to colleagues, build relationships, develop expertise, self-select projects of interest to them, and expand skill sets well beyond their formal roles.
    - A fundamental promise of Enterprise 2.0 is that ideas will be generated and shared by everyone across the organization, leading to increased innovation, agility, and competitive advantage.

    How well is your organization delivering on these concepts? Are you able to successfully bring together people, processes and content? Are you providing the social tools your employees want and need? Are you experiencing the new social enterprise?

    Read the article

  • EclipseLink does multitenancy. Today.

    - by alexismp
    So you heard Java EE 7 will be about the cloud, but that didn't mean a whole lot to you. Then it was characterized as PaaS, something in between IaaS and SaaS. And finally it all became clear when referenced as support for multitenancy. Or did it? When it comes to JPA and persistence in general, multitenancy is defined as the ability to share a database schema among various groups of users (i.e. tenants). This means that there is no database setup or reconfiguration required, as the data is co-located in the same database. EclipseLink 2.3 (the Indigo train release) lets you do just that by supporting tenant discriminator column(s) via annotations or XML, with applications providing values for these discriminators via an API or PU configuration. Check out the details here. EclipseLink 2.3 is scheduled to be the default and supported JPA provider for GlassFish 3.1.1. Another nice feature of this release is the ability to extend persistence units on the fly. The GlassFish Podcast has an interview up with EclipseLink's Doug Clarke. Expect more on multitenancy across the Java EE spectrum as the specification work progresses.
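    To give a concrete flavor of the feature, here is a minimal sketch using the EclipseLink 2.3 multitenancy annotations mentioned above. The entity, column name, persistence unit name, and property key are illustrative assumptions, not taken from the article; consult the EclipseLink documentation for the exact configuration in your setup.

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Id;
    import javax.persistence.Persistence;
    import org.eclipse.persistence.annotations.Multitenant;
    import org.eclipse.persistence.annotations.TenantDiscriminatorColumn;

    // Rows for every tenant share the same schema and table; EclipseLink
    // maintains and filters on the TENANT_ID discriminator column for you.
    @Entity
    @Multitenant
    @TenantDiscriminatorColumn(name = "TENANT_ID")
    public class Invoice {
        @Id
        private Long id;
        private String description;
        // getters and setters omitted
    }

    // Elsewhere: the tenant value is supplied as a persistence property
    // (it could equally be set in persistence.xml for the PU).
    class TenantBootstrap {
        static EntityManager entityManagerForTenant(String tenantId) {
            Map<String, Object> props = new HashMap<String, Object>();
            // Assumed default EclipseLink context property for the tenant id.
            props.put("eclipselink.tenant-id", tenantId);
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("invoicesPU", props);
            return emf.createEntityManager();
        }
    }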

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider Robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2) One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an Exacto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering and you have given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact. There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's (from adafruit.com) links on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html If everything looks good, zip everything back up and meet back here for how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E. In fact there were two main concerns:

    1. The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field, because version 7 is newer and therefore potentially more optimized.
    2. The generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

    With those considerations in mind, we'll institute the following changes to this version of the benchmarking:

    1. In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo. Funded and supported by many prestigious organizations, DaCapo aims to benchmark real-world applications. Further information about DaCapo can be found at http://dacapobench.org.
    2. At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings. The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
    3. Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release. That platform is the Freescale i.MX53 Quick Start Board. It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.

    We'll limit comparisons to 4 JVM implementations:

    - Java SE-E 7 Update 2 c1 compiler (default)
    - Java SE-E 6 Update 30 (c1 compiler is the only option)
    - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2
    - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel

    Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive. The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers. Recall that c2 works optimally in long-lived situations; many of these benchmarks completed in a relatively short period of time. To get a feel for where c2 shines, take a look at the first chart in this blog. The first chart that follows includes performance of all benchmark runs on all platforms. Later on we'll look more at individual tests. In all runs, smaller means faster. The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed. The reason is that these 10 tests are the only ones successfully completed by all 4 JVMs. Only Java SE-E 6u30 could successfully run all of the tests. Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts. One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regard to performance. While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, thus balancing out any potential performance gains at this point. We are still early into Java SE 7.
    We would expect further performance enhancements for Java SE-E 7 in future updates. In comparing Java SE-E to OpenJDK performance, among both OpenJDK VMs, Cacao results are respectable in 4 of the 10 tests. The charts that follow show the individual results of those four tests. Both Java SE-E versions win every test and outperform Cacao in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms Cacao, in the range of 114% to 311%. So it looks like OpenJDK results are mixed for this round of benchmarks. In some cases, performance looks to have improved. But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably. Time to put on my asbestos suit. Let the flames begin...

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial Trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar - very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 will not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 Type Converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility of course is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
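    For a flavor of the converter approach, here is a minimal sketch along the same lines as the blog entry described above (not Steven's exact code): a JPA 2.1 AttributeConverter that maps java.time.LocalDate to java.sql.Date, so entities can use the new type even though the JPA 2.1 spec itself only knows about the old ones.

    import java.sql.Date;
    import java.time.LocalDate;
    import javax.persistence.AttributeConverter;
    import javax.persistence.Converter;

    // autoApply = true means every LocalDate attribute in the persistence
    // unit is converted without needing an explicit @Convert annotation.
    @Converter(autoApply = true)
    public class LocalDateConverter implements AttributeConverter<LocalDate, Date> {

        @Override
        public Date convertToDatabaseColumn(LocalDate attribute) {
            // java.sql.Date.valueOf(LocalDate) was added in Java SE 8.
            return attribute == null ? null : Date.valueOf(attribute);
        }

        @Override
        public LocalDate convertToEntityAttribute(Date dbData) {
            return dbData == null ? null : dbData.toLocalDate();
        }
    }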

    Read the article

  • RPi and Java Embedded GPIO: Connecting LEDs

    - by hinkmond
    Next, we need some low-level peripherals to connect to the Raspberry Pi GPIO header. So, we'll do what's called a "Fry's Run" in Silicon Valley, which means we go shop at the local Fry's Electronics store for parts. In this case, we'll need some breadboard jumper wires (blue wires in the photo), some LEDs, and some resistors (for the RPi GPIO, 150 ohms - 300 ohms would work for the 3.3V output of the GPIO ports). And, if you want to do other projects, you might as well buy a breadboard, which is a development board with lots of holes in it. Ask a Fry's clerk for help. Or, better yet, ask the customer standing next to you in the electronics components aisle for help. (Might be faster) So, go to your local hobby electronics store, or go to Fry's if you have one close by, and come back here for the next blog post to see how to hook these parts up. Hinkmond
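    A quick sanity check on that 150-300 ohm range, assuming a typical red LED forward voltage of about 2 V and a target current of roughly 8 mA (both assumed values, not from the post):

    \[ R = \frac{V_{GPIO} - V_{LED}}{I} = \frac{3.3\,\mathrm{V} - 2.0\,\mathrm{V}}{0.008\,\mathrm{A}} \approx 160\,\Omega \]

    Values toward 300 ohms just dim the LED slightly and reduce the load on the GPIO pin even further.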

    Read the article

  • Management Software in Java for Networked Bus Systems

    - by Geertjan
    Telemotive AG develops tools for complex networked bus systems such as Ethernet, MOST, CAN, FlexRay, LIN, and Bluetooth, as well as in-house products in infotainment, entertainment, and telematics related to driver assistance, connectivity, diagnosis, and e-mobility. Devices such as those developed by Telemotive typically come with management software, so that the device can be configured. (Just like an internet router comes with management software too.) The blue AdmiraL is a development and analysis device for the APIX (Automotive Pixel Link) technology. Here is its management tool: The blue PiraT is an optimised multi-data logger, developed by Telemotive specifically for the automotive industry. With the blue PiraT, the communication of bus systems and control units is monitored and relevant data can be recorded very precisely. And here is how the tool is managed: Both applications are created in Java and, as clearly indicated in many ways in the screenshots above, are based on the NetBeans Platform. More details can be found on the Telemotive site.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

    CREATE TABLE t(a INT);
    INSERT INTO t VALUES (1),(2),(3);
    CREATE INDEX a ON t(a);
    DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

    CREATE TABLE temp(a INT, INDEX(a));
    INSERT INTO temp SELECT * FROM t;
    RENAME TABLE t TO temp2;
    RENAME TABLE temp TO t;
    DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter: Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

    handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

    handler::inplace_alter_table: In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table).

    handler::commit_inplace_alter_table: This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log.
    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
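    To tie this back to the user-visible syntax: a minimal JDBC sketch that requests an online, in-place secondary index build using the LOCK and ALGORITHM clauses described above. The table, index, and connection settings are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class OnlineIndexDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection settings; adjust URL, user, and password.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "secret");
                 Statement st = con.createStatement()) {
                // Build the secondary index online: concurrent writes to the
                // table remain allowed (LOCK=NONE) and no table copy is made
                // (ALGORITHM=INPLACE).
                st.execute("ALTER TABLE orders "
                         + "ADD INDEX idx_customer (customer_id), "
                         + "ALGORITHM=INPLACE, LOCK=NONE");
            }
        }
    }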

    Read the article

  • WhatsApp Chat Messenger available for Java ME phones

    - by hinkmond
    If you like sending SMS text messages from your Java ME tech-enabled mobile phone without having to pay carrier charges, then WhatsApp Messenger is for you. See: Don't pay, Use Java ME WhatsApp Here's a quote: Free WhatsApp Messenger Download For S40 Java Phone now Available. The IM chat app whatsapp was earlier targeted on high end/cross-platform mobile phone with support for messaging exchange, SMS messages, send and receive pictures, exchange of videos and audios, share your location with your contacts etc. So, be a cheap-skate. It's OK. You're entitled. As long as you use WhatsApp and Java ME technology, that is. Hinkmond

    Read the article

  • Parleys Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Parleys.com is an e-learning platform that provides a unique experience of online and offline viewing of presentations, with integrated movies and chaptering, from top-notch developer conferences and about 40 JUGs all around the world. Stephan Janssen (the Devoxx man and Parleys webmaster) presented at the GlassFish Community Event at JavaOne 2012 and shared why they moved from Tomcat to GlassFish. The move paid off, as GlassFish was able to handle 2000 concurrent users very easily. Now they are also running the Devoxx CFP and registration on this updated infrastructure. The GlassFish clustering, the asadmin CLI, application versioning, and the JMS implementation are some of the features that made them a happy user. Recently they migrated their application from Spring to Java EE 6. This allows them to avoid getting locked into proprietary frameworks and also avoid 40MB WAR file deployments. Stateless application, JAX-RS, MongoDB, and Elastic Search are their magic formula for success there. Watch the video below showing him in full action: More details about their infrastructure are available here.

    Read the article

  • Several New Hints

    - by Ondrej Brejla
    Hi all! Today we would like to introduce you to some of our new experimental hints for NetBeans 7.2. They are called Unused Use Statement and Immutable Variables.

    Unused Use Statement

    This hint is quite simple. It highlights (underlines) your use statements which are not used. A typical use case is after some refactoring, when you forgot to remove some obsolete use statements. This hint warns you about them and allows you to remove them easily. Just click on the hint bulb in the gutter and select Remove Unused Use Statement. And of course, it works in combined multiple use statements too.

    Immutable Variables

    The next one is the hint which checks for too many assignments to a variable. And why? That's simple. Mostly you should use just one assignment to one variable. But sometimes you are lazy and assign to the same variable several times when one assignment would do, and that's exactly the case when our new hint warns you that Too many assignments (2) into variable $foo occurred. Nothing more. Yes, we know that there are some cases where there could be more assignments and no warning should occur, e.g. when one likes the longer increment syntax more than the short one. So we tried to handle these cases so the hint doesn't bother you when there is no need. Note: We are almost sure that this hint doesn't cover all your use cases, because there are a lot of them. So if you find something strange, write it into our Bugzilla so we can handle it better for you. Thanks for your patience! And the last thing is that you can set the number of allowed assignments in Tools -> Options -> Editor -> Hints -> PHP: Immutable Variables. Note: This hint works just for common variables, not for fields. We have an enhancement request for that and it should be implemented in the next version of NetBeans (probably 7.3). And that's all for today. As usual, please test it and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks.

    Read the article

  • Capgemini report - Business Cloud: The State of Play Shifts Rapidly

    - by Javier Puerta
    Capgemini has published a recent survey on the state of play of cloud adoption. The report indicates "clear evidence that the business, rather than purely IT, is becoming involved in driving Cloud strategy, and pioneering its use for ‘edge’ growth initiatives."  Ron Tolido, Senior Vice President and Chief Technology Officer of Applications Continental Europe at Capgemini, was one of the keynote speakers at our Exadata & Manageability Partner Community event in Istanbul in March. He is one of the drivers of this survey. Read his article "3 Key Cloud Insights for 2013". You can download the full report here:  "Business Cloud: The State of Play Shifts Rapidly - Fresh Insights into Cloud Adoption Trends"

    Read the article

  • Annotation Processor for Superclass Sensitive Actions

    - by Geertjan
    Someone creating superclass sensitive actions should need to specify only the following things:

    - The condition under which the popup menu item should be available, i.e., the condition under which the action is relevant. And, for superclass sensitive actions, the condition is the name of a superclass. I.e., if I'm creating an action that should only be invokable if the class implements "org.openide.windows.TopComponent", then that fully qualified name is the condition.
    - The position in the list of Java class popup menus where the new menu item should be found, relative to the existing menu items.
    - The display name.
    - The path to the action folder where the new action is registered in the Central Registry.
    - The code that should be executed when the action is invoked.

    In other words, the code for the enablement (which, in this case, means the visibility of the popup menu item when you right-click on the Java class) should be handled generically, under the hood, and not every time all over again in each action that needs this special kind of enablement. So, here's the usage of my newly created @SuperclassBasedActionAnnotation, where you should note that the DataObject must be in the Lookup, since the action will only be available to be invoked when you right-click on a Java source file (i.e., text/x-java) in an explorer view:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation;
    import org.openide.awt.StatusDisplayer;
    import org.openide.loaders.DataObject;
    import org.openide.util.NbBundle;
    import org.openide.util.Utilities;

    @SuperclassBasedActionAnnotation(
            position=30,
            displayName="#CTL_BrandTopComponentAction",
            path="File",
            type="org.openide.windows.TopComponent")
    @NbBundle.Messages("CTL_BrandTopComponentAction=Brand")
    public class BrandTopComponentAction implements ActionListener {

        private final DataObject context;

        public BrandTopComponentAction() {
            context = Utilities.actionsGlobalContext().lookup(DataObject.class);
        }

        @Override
        public void actionPerformed(ActionEvent ev) {
            String message = context.getPrimaryFile().getPath();
            StatusDisplayer.getDefault().setStatusText(message);
        }

    }

    That implies I've created (in a separate module to where it is used) a new annotation.
    Here's the definition:

    package org.netbeans.sbas.annotations;

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.TYPE)
    public @interface SuperclassBasedActionAnnotation {
        String type();
        String path();
        int position();
        String displayName();
    }

    And here's the processor:

    package org.netbeans.sbas.annotations;

    import java.util.Set;
    import javax.annotation.processing.Processor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.lang.model.util.Elements;
    import org.openide.filesystems.annotations.LayerBuilder.File;
    import org.openide.filesystems.annotations.LayerGeneratingProcessor;
    import org.openide.filesystems.annotations.LayerGenerationException;
    import org.openide.util.lookup.ServiceProvider;

    @ServiceProvider(service = Processor.class)
    @SupportedAnnotationTypes("org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation")
    @SupportedSourceVersion(SourceVersion.RELEASE_6)
    public class SuperclassBasedActionProcessor extends LayerGeneratingProcessor {

        @Override
        protected boolean handleProcess(Set annotations, RoundEnvironment roundEnv)
                throws LayerGenerationException {
            Elements elements = processingEnv.getElementUtils();
            for (Element e : roundEnv.getElementsAnnotatedWith(SuperclassBasedActionAnnotation.class)) {
                TypeElement clazz = (TypeElement) e;
                SuperclassBasedActionAnnotation mpm = clazz.getAnnotation(SuperclassBasedActionAnnotation.class);
                String teName = elements.getBinaryName(clazz).toString();
                String originalFile = "Actions/" + mpm.path() + "/" + teName.replace('.', '-') + ".instance";
                File actionFile = layer(e).file(
                        originalFile).
                        bundlevalue("displayName", mpm.displayName()).
                        methodvalue("instanceCreate",
                                "org.netbeans.sbas.annotations.SuperclassSensitiveAction", "create").
                        stringvalue("type", mpm.type()).
                        newvalue("delegate", teName);
                actionFile.write();
                File javaPopupFile = layer(e).file(
                        "Loaders/text/x-java/Actions/" + teName.replace('.', '-') + ".shadow").
                        stringvalue("originalFile", originalFile).
                        intvalue("position", mpm.position());
                javaPopupFile.write();
            }
            return true;
        }

    }

    The "SuperclassSensitiveAction" referred to in the code above is unchanged from how I had it in yesterday's blog entry.
    When I build the module containing two action listeners that use my new annotation, the generated layer file looks as follows, which is identical to the layer file entries I hard coded yesterday:

    <folder name="Actions">
        <folder name="File">
            <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance">
                <attr name="displayName" stringvalue="Process Action Listener"/>
                <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                <attr name="type" stringvalue="java.awt.event.ActionListener"/>
                <attr name="delegate" newvalue="org.netbeans.sbas.impl.ActionListenerSensitiveAction"/>
            </file>
            <file name="org-netbeans-sbas-impl-BrandTopComponentAction.instance">
                <attr bundlevalue="org.netbeans.sbas.impl.Bundle#CTL_BrandTopComponentAction" name="displayName"/>
                <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                <attr name="type" stringvalue="org.openide.windows.TopComponent"/>
                <attr name="delegate" newvalue="org.netbeans.sbas.impl.BrandTopComponentAction"/>
            </file>
        </folder>
    </folder>
    <folder name="Loaders">
        <folder name="text">
            <folder name="x-java">
                <folder name="Actions">
                    <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"/>
                        <attr intvalue="10" name="position"/>
                    </file>
                    <file name="org-netbeans-sbas-impl-BrandTopComponentAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-BrandTopComponentAction.instance"/>
                        <attr intvalue="30" name="position"/>
                    </file>
                </folder>
            </folder>
        </folder>
    </folder>

    Read the article

  • Schema Based Code Completion for NetBeans Platform Applications

    - by Geertjan
    Toni's recent blog entry provides, among several other interesting things, instructions for something I've been wanting to cover for a long time, which is schema based code completion: The above is a sample I created via Toni's tutorial, using the schema described here: http://www.w3schools.com/schema/schema_example.asp The support for the Navigator ain't bad either, especially considering I didn't do any coding at all to get all this: And here's where you can find the whole sample: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/ShipOrder

    Read the article

  • Mobile or the Science of Programming Languages

    - by user12652314
    Just two things to share today. First is some news in the mobile computing space and a pretty cool new relationship developing with DubLabs and AT&T to enable a student-centric mobile experience for our Campus Solutions customers. And second is an interesting article shared by a friend on Research in Programming Languages related to STEM education - a key story element in my project with the America's Cup and iED, but also relevant to our national interest.

    Read the article

  • Will We See More Partisan Splits from the SEC?

    - by Theresa Hickman
    The SEC's lawsuit against Goldman Sachs has made recent headlines. The fact that the SEC seems to be growing more litigious by making examples out of individuals and companies is not the topic of my blog. The most interesting thing about this case is that the 5 SEC commissioners did not vote unanimously to bring the lawsuit. The commissioners had a 3-2 partisan split. Ms. Shapiro (a registered independent) voted with the 2 Democrats. Split votes by the SEC rarely happen, especially when it is enforcing actions against firms it regulates. I wonder if we will be seeing more of these partisan split votes when it comes to other decisions, say IFRS adoption? I know both the Democrats and Republicans have stated that they support a unified accounting standard. However, will the Republicans want to push back simply because there is a Democrat in office? (Seems childish to me, but I never understood politics.) I think Ms. Shapiro will most definitely want a unanimous consensus related to the IFRS topic. There is already talk that we will be seeing more SEC split votes in the future. For example, there will most likely be a split vote regarding Obama's proposed financial-regulatory overhaul. I don't see why IFRS would be exempt. I really hope it doesn't happen, because the last thing we need is more road blocks on our IFRS road trip.

    Read the article

  • Inline template efficiency

    - by Darryl Gove
    I like inline templates, and use them quite extensively. Whenever I write code with them I'm always careful to check the disassembly to see that the resulting output is efficient. Here's a potential cause of inefficiency. Suppose we want to use the mis-named Leading Zero Detect (LZD) instruction on T4 (this instruction does a count of the number of leading zero bits in an integer register - so it should really be called leading zero count). So we put together an inline template called lzd.il looking like:

    .inline lzd
      lzd %o0,%o0
    .end

    And we throw together some code that uses it:

    int lzd(int);

    int a;
    int c=0;

    int main()
    {
      for(a=0; a<1000; a++)
      {
        c=lzd(c);
      }
      return 0;
    }

    We compile the code with some amount of optimisation, and look at the resulting code:

    $ cc -O -xtarget=T4 -S lzd.c lzd.il
    $ more lzd.s

    .L77000018:
    /* 0x001c  11 */  lzd     %o0,%o0
    /* 0x0020   9 */  ld      [%i1],%i3
    /* 0x0024  11 */  st      %o0,[%i2]
    /* 0x0028   9 */  add     %i3,1,%i0
    /* 0x002c     */  cmp     %i0,999
    /* 0x0030     */  ble,pt  %icc,.L77000018
    /* 0x0034     */  st      %i0,[%i1]

    What is surprising is that we're seeing a number of loads and stores in the code. Everything could be held in registers, so why is this happening? The problem is that the code is only inlined at the code generation stage - when the actual instructions are generated. Earlier compiler phases see a function call. The called functions can do all kinds of nastiness to global variables (like 'a' in this code), so we need to load them from memory after the function call, and store them to memory before the function call. Fortunately we can use a #pragma directive to tell the compiler that the routine lzd() has no side effects - meaning that it does not read or write to memory. The directive to do that is #pragma no_side_effect(<routine name>), and it needs to be placed after the declaration of the function. The new code looks like:

    int lzd(int);
    #pragma no_side_effect(lzd)

    int a;
    int c=0;

    int main()
    {
      for(a=0; a<1000; a++)
      {
        c=lzd(c);
      }
      return 0;
    }

    Now the loop looks much neater:

    /* 0x0014  10 */  add     %i1,1,%i1
    !   11     !  {
    !   12     !    c=lzd(c);
    /* 0x0018  12 */  lzd     %o0,%o0
    /* 0x001c  10 */  cmp     %i1,999
    /* 0x0020     */  ble,pt  %icc,.L77000018
    /* 0x0024     */  nop

    Read the article

  • Salon du E-commerce et Social CRM B2B

    - by Valérie De Montvallon
    We took part in the Salon du E-commerce et Social CRM B2B last September, and we are sharing the video produced by Les décideurs de la relation client. Hear customer relationship experts give their views and learn more about B2B Social CRM. In B2B, managing the customer relationship seems simple enough when it comes to gathering information from phone calls, face-to-face meetings, or emails. The task gets harder on social networks, however. Are these platforms really suited to B2B? How do you proceed when you are starting out? What are the pitfalls to avoid? What suggests that B2B Social CRM is a genuine trend in customer relationship management? These are all questions to which the experts we met brought some answers. You will also find the interview with our expert, Khalid Madarbokus, who discusses feeding information back from social media into the departments of a B2B company (at 3:20).

    Read the article

  • Framework Folders and Duplicate File Names

    - by Kevin Smith
    I have been working with Framework Folders a little bit over the past few days and found one unexpected behavior that differs from Contribution Folders (Folders_g). If you try to check in a file whose name already exists in a Framework Folder, the check-in is allowed and the file is renamed for you. In Folders_g this would have generated an error and prevented you from checking in the file. A quick check of the Framework Folders configuration settings in the Application Administrator's Guide for Content Server does not show a configuration parameter to control this. I'm still thinking about this and not sure whether I like the new behavior. I guess from a user perspective it more closely aligns Framework Folders with how Windows handles duplicate file names, but if you are migrating from Folders_g and expect a duplicate file name to be rejected, this might cause you some problems.

    Read the article

  • New qeep app for Java ME feature phones: meet qeepy people

    - by hinkmond
    Is it "qeepy" if you meet people by using your cell phone instead of, you know, talking to them? Nah. Not if it's a Java ME cell phone! See: Use Qeep to Meet Peeps Here's a quote: Qeep is a free app, and compatible with over 1,000 Java-enabled feature phones... ... Qeep is one of the world's largest mobile gaming and social discovery platforms. Members of the mobile community can play live multiplayer games; blog photos; send sound attacks, text messages and virtual gifts; and meet new friends worldwide. So, go on. Go, use Qeep on your Java ME feature phone to play multiplayer games, blog photos, and meet new friends worldwide. No one will think that you're weird... Not much, at least. Hinkmond

    Read the article

  • Sources of NetBeans Gradle Plugin

    - by Geertjan
    Here is where you can find the sources of the latest and greatest NetBeans Gradle plugin: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.1/misc/GradleSupport To use it, download the sources above and open them in the IDE (which must be 7.1.1 or above); you'll then have a NetBeans module. Right-click it to run the module in a new instance of NetBeans IDE. In the Options window's Miscellaneous tab, there's a Gradle subtab for setting the Gradle location. In the New File dialog, in the Other category, you'll find a template named "Empty Gradle file". Make sure to name it "build" and to put it in the root directory of the application (by leaving the Folder field empty, you're specifying that it should be created in the root directory). You'll then be able to expand the build.gradle file: Double-click a task to run it. When you open the file, it opens in the Groovy editor, if the Groovy editor is installed. When you make changes in the file, the list of tasks, shown above, is automatically recreated. It's at a really early stage of development and it would be great if developers out there were interested in adding more features to it.

    Read the article

  • Look after your tribe of Pygmies with Java ME technology

    - by hinkmond
    Here's a game that is crossing over from the iDrone to the more lucrative Java ME cell phone market. See: Pocket God on Java ME Here's a quote: Massive casual iPhone hit Pocket God has parted the format waves and walked over to the land of Java mobiles, courtesy of AMA. The game sees you take control of an omnipotent, omnipresent, and (possibly) naughty deity, looking after your tribe of Pygmies... Everyone knows that there are more Java ME feature phones than grains of sand on a Pocket God island beach. So, when iDrone games are done piddling around on a lesser platform, they move over to Java ME where things are really happening. Hinkmond

    Read the article

  • Essbase - FormatString

    - by THE
    A look at the documentation for "Typed Measures" shows: "Using format strings, you can format the values (cell contents) of Essbase database members in numeric type measures so that they appear, for query purposes, as text, dates, or other types of predefined values. The resultant display value is the cell's formatted value (FORMATTED_VALUE property in MDX). The underlying real value is numeric, and this value is unaffected by the associated formatted value." To actually switch ON the use of typed measures in general, you need to navigate to the outline properties: open the outline, select Properties, and change "Typed Measures enable" to TRUE. As an example, I created two additional members in the ASOSamp outline. A member "delta Price" in the Measures (Accounts) Dimension with the formula ([Original Price],[Curr_year])-([Original Price],[Prev_year]), which is equivalent to the Variance formula used in the "Years" Dimension. And a member "Var_Quickview" in the "Years" Dimension with the same formula as the "Variance" member, which will be used simply to display a second cell with the same underlying value as Variance, but formatted using a format string, hence enabling BOTH in the same report. In the outline you now select the member you want the format string associated with and change the "associated Format String" in the Member Properties. As you can see in this example, an IIF statement reading MdxFormat(IIF(CellValue()< 0,"Negative","Positive" ) ) has been chosen for both new members. After applying the Format String changes and running a report via SmartView, the result is shown in the screenshot above. Reference: Essbase Database Admin Guide, Chapter 12, "Working with Typed Measures".

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles),
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example), and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
    If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way relating to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
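    The shape of that feedback loop - hold the objective constant, vary the resources - is easy to sketch in code. The following is purely illustrative: MetricsProbe and ResourceController are hypothetical stand-ins for the DTrace-based instrumentation and the OS resource manager, not Solaris APIs.

    // Hypothetical delay categories reported by the instrumentation.
    enum Delay { CPU, MEMORY, IO }

    interface MetricsProbe {
        double averageResponseMillis();   // externally visible metric
        Delay dominantDelay();            // biggest contributor to latency
    }

    interface ResourceController {
        void addCpuShares(int shares);
        void addMemoryMb(int mb);
        void addIoBandwidth(int percent);
        void trim();                      // give back surplus resources
    }

    public class WorkloadManager {

        private static final double TARGET_RESPONSE_MS = 200.0;  // the service level objective

        public static void manage(MetricsProbe probe, ResourceController rm)
                throws InterruptedException {
            while (true) {
                double observed = probe.averageResponseMillis();
                if (observed > TARGET_RESPONSE_MS) {
                    // Objective missed: find the dominant delay and relieve that shortage.
                    switch (probe.dominantDelay()) {
                        case CPU:    rm.addCpuShares(10);   break;
                        case MEMORY: rm.addMemoryMb(256);   break;
                        case IO:     rm.addIoBandwidth(10); break;
                    }
                } else if (observed < 0.5 * TARGET_RESPONSE_MS) {
                    // Comfortably ahead of the objective: return resources to the pool.
                    rm.trim();
                }
                Thread.sleep(5_000);  // re-evaluate periodically
            }
        }
    }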

    Read the article
