Search Results

Search found 14414 results on 577 pages for 'oracle irm'.


  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    These came from functions that were defined to match a specific callback definition, which declared them as returning an int if they did return; but these functions called exit() instead of returning, so they listed no return value. They had been generating warnings for years, which we'd been ignoring. But X.Org has made enough progress in cleaning up code for compiler warnings and static-analysis issues lately that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so these warnings became errors that stopped the build.

    Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers: during Solaris 10 development, a number of annotations were added to the headers while gcc was being used for the amd64 kernel bringup, before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that new X.Org sources would build on older Solaris releases or with older Studio compilers, and so that the general problem would not break more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to declare that functions like exit() never return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable-code errors from compilers and analyzers that correctly realized you couldn't reach code after a return statement. On the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so they may change in the future.

    Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build-comparison tool, wsdiff, showed a large number of differences in the resulting binaries, due to the compiler using this information for branch prediction, code-path analysis, and other possible optimizations. So, after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean.

    Previously, if you had a function declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. For instance:

        #include <stdlib.h>

        int callback(int status) {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.

    Read the article

  • OSB, Service Callouts and OQL - Part 2

    - by Sabha
    This section of the "OSB, Service Callouts and OQL" blog posting will delve into thread dump analysis of an OSB server and detecting threading issues related to Service Callouts using ThreadLogic. We will also use a heap dump and OQL to identify the related proxy and business services. The previous section dealt with the threading model used by OSB to handle Routes and Service Callouts; please refer to that blog post for more details.

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace using the SLOW SQL (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer.

    Open a new session in SQL*Plus:

        -- Make sure you are using an updated PLAN_TABLE.
        -- This can be done by dropping it and recreating it by running:
        -- SQL> @?/rdbms/admin/utlxplan.sql
        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

    Open a second session in SQL*Plus:

        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.
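
    If you'd rather capture the plan from application code than from SQL*Plus, here is a minimal JDBC sketch that should work on recent Oracle releases, where utlxplp calls DBMS_XPLAN under the covers. The connection URL, credentials, and the query itself are placeholders, not values from this note:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ExplainPlanDemo {
            public static void main(String[] args) throws Exception {
                // Connection details are placeholders -- point them at your database.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                     Statement st = conn.createStatement()) {
                    // Use exactly the same slow query here that you want analyzed.
                    st.execute("EXPLAIN PLAN FOR "
                            + "SELECT * FROM t_comb_list WHERE comb_id = 216");
                    // DBMS_XPLAN.DISPLAY reads PLAN_TABLE and formats it, which is
                    // essentially what @?/rdbms/admin/utlxplp does.
                    try (ResultSet rs = st.executeQuery(
                            "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }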

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you may notice that if you select 'all' and update, it changes to just a *.

    To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it's better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'. You may also find previous trace logs that have rolled over; in this case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size, and it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        # Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    These are set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section. Restart the server and it should take on the new logging settings.

    Read the article

  • Russian Hydrodynamic Modeling, Prediction, and Visualization in Java

    - by Geertjan
    JSC "SamaraNIPIoil", located in Samara, Russia, provides the following applications for internal use. SimTools. Used to create & manage reservoir history schedule files for hydrodynamic models. The main features are that it lets you create/manage schedule files for models and create/manage well trajectory files to use with schedule files. DpSolver. Used to estimate permeability cubes using pore cube and results of well testing. Additionally, the user visualizes maps of vapor deposition polymerization or permeability, which can be visualized layer by layer. The base opportunities of the application are that it enables calculation of reservoir vertical heterogeneity and vertical sweep efficiency; automatic history matching of sweep efficiency; and calculations using Quantile-Quantile transformation and vizualization of permeability cube and other reservoir data. Clearly, the two applications above are NetBeans Platform applications.

    Read the article

  • Running external commands improved a bit

    - by Tomas Mysik
    Hi all, today we would like to show you one small improvement related to running external commands (e.g. generating documentation, running framework commands, etc.), which will be available in NetBeans 7.3. First, have a look at the screenshot: As you can see, the first line represents the command that is being executed. In case of any error, this command can easily be copied and pasted to the console for deeper investigation (and proper bug reports ;). Also please notice that the Output window now supports background colors. That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans Bugzilla (component php, subcomponent Code).

    Read the article

  • Smarter Search Results in NetBeans IDE 7.2

    - by Geertjan
    After you search your code using NetBeans IDE (using Ctrl-F for "Find" or Ctrl-H for "Replace"), you see the Search Results window, which looks like this: At least, the above is how it looks in NetBeans IDE 7.2. Before that, you didn't have all those extra columns (which can be displayed in the Search Results window after clicking the small button top right in the view) and you also didn't have the quick search (which is invoked by typing directly into the Search Results window), as can be seen here: So, the Search Results window now provides a lot more info than before. Being able to know the path to a file I've found, as well as the last modification date, file size, and the number of matches within the file, is useful at the end of a search process. In the NetBeans IDE 7.2 New & Noteworthy, the above changes are described in the Utilities section, as well as in the Quick Search in OutlineView section, where you can read that these are generic solutions that can be used in your own OutlineViews. Other OutlineViews in NetBeans IDE 7.2, such as the Debugger window, now also have these new features. A related article worth reading is Beefed Up Code Navigation Tools in NetBeans IDE 7.2. 
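
    Since the quick search and extra columns are described as generic solutions for your own OutlineViews, here is a minimal sketch of what reusing the component might look like in your own NetBeans Platform window. The class and property names here are illustrative, not taken from the IDE's sources:

        import java.awt.BorderLayout;
        import javax.swing.JPanel;
        import org.openide.explorer.ExplorerManager;
        import org.openide.explorer.view.OutlineView;

        public class ResultsPanel extends JPanel implements ExplorerManager.Provider {

            private final ExplorerManager manager = new ExplorerManager();

            public ResultsPanel() {
                setLayout(new BorderLayout());
                OutlineView view = new OutlineView("Match"); // label of the tree column
                // Extra columns backed by node properties, like the columns
                // in the Search Results window:
                view.addPropertyColumn("path", "Path");
                view.addPropertyColumn("lastModified", "Last Modified");
                add(view, BorderLayout.CENTER);
                // manager.setRootContext(...) would supply the nodes to display.
            }

            @Override
            public ExplorerManager getExplorerManager() {
                return manager;
            }
        }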

    Read the article

  • YouTube: Realtime Graph Sharing on the NetBeans Platform

    - by Geertjan
    Yet another really cool movie by the Maltego team in South Africa, this time showing Visual Library widgets in their NetBeans Platform application shared in realtime between different users of the Maltego open source intelligence gathering and analytics software: What you see above is Maltego CaseFile. Below, you can find out more about it in the latest blog entry on the Maltego site: http://maltego.blogspot.be/2013/11/maltego-casefile-v2-released.html

    Read the article

  • HTML Tidy in NetBeans IDE

    - by Geertjan
    First step in integrating HTML Tidy (via its JTidy implementation) into NetBeans IDE. The reason I started doing this is that I want to integrate it into the pluggable analyzer functionality of NetBeans IDE that I recently blogged about, i.e., where the FindBugs functionality is found. So a logical first step is to get it working in an Action class, after which I can port it into the analyzer infrastructure:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.io.StringWriter;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionReferences;
        import org.openide.awt.ActionRegistration;
        import org.openide.cookies.EditorCookie;
        import org.openide.cookies.LineCookie;
        import org.openide.loaders.DataObject;
        import org.openide.text.Line;
        import org.openide.text.Line.ShowOpenType;
        import org.openide.util.Exceptions;
        import org.openide.util.NbBundle.Messages;
        import org.openide.windows.IOProvider;
        import org.openide.windows.InputOutput;
        import org.openide.windows.OutputEvent;
        import org.openide.windows.OutputListener;
        import org.openide.windows.OutputWriter;
        import org.w3c.tidy.Tidy;

        @ActionID(category = "Tools", id = "org.jtidy.TidyAction")
        @ActionRegistration(displayName = "#CTL_TidyAction")
        @ActionReferences({
            @ActionReference(path = "Loaders/text/html/Actions", position = 150),
            @ActionReference(path = "Editors/text/html/Popup", position = 750)
        })
        @Messages("CTL_TidyAction=Run HTML Tidy")
        public final class TidyAction implements ActionListener {

            private final DataObject context;
            private final OutputWriter writer;
            private EditorCookie ec = null;

            public TidyAction(DataObject context) {
                this.context = context;
                ec = context.getLookup().lookup(org.openide.cookies.EditorCookie.class);
                // Send JTidy's findings to an "HTML Tidy" tab in the Output window:
                InputOutput io = IOProvider.getDefault().getIO("HTML Tidy", false);
                io.select();
                writer = io.getOut();
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                Tidy tidy = new Tidy();
                try {
                    writer.reset();
                    // Collect Tidy's diagnostics into a StringWriter:
                    StringWriter stringWriter = new StringWriter();
                    PrintWriter errorWriter = new PrintWriter(stringWriter);
                    tidy.setErrout(errorWriter);
                    tidy.parse(context.getPrimaryFile().getInputStream(), System.out);
                    String[] split = stringWriter.toString().split("\n");
                    for (final String string : split) {
                        // Messages look like "line 12 column 1 - Warning: ...",
                        // so cut at " column" to isolate the line number:
                        final int end = string.indexOf(" c");
                        if (string.startsWith("line")) {
                            writer.println(string, new OutputListener() {
                                @Override
                                public void outputLineAction(OutputEvent oe) {
                                    // Jump to the reported line when the message is clicked:
                                    LineCookie lc = context.getLookup().lookup(LineCookie.class);
                                    int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));
                                    Line line = lc.getLineSet().getOriginal(lineNumber - 1);
                                    line.show(ShowOpenType.OPEN, Line.ShowVisibilityType.FOCUS);
                                }
                                @Override
                                public void outputLineSelected(OutputEvent oe) {}
                                @Override
                                public void outputLineCleared(OutputEvent oe) {}
                            });
                        }
                    }
                } catch (IOException ex) {
                    Exceptions.printStackTrace(ex);
                }
            }
        }

    The string parsing above is ugly but gets the job done for now. A problem integrating this into the pluggable analyzer functionality is the limitation of its scope. The analyzer lets you select one or more projects, or individual files, but not a folder. So it doesn't work on folders in the Favorites window, for example, which is where I'd like to apply HTML Tidy, across multiple folders via the analyzer functionality. That's a bit of a bummer that I'm hoping to get around somehow.
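
    For anyone who wants to experiment with JTidy outside the IDE before porting it anywhere, a minimal standalone sketch, using only the same JTidy calls as the Action above (the file paths are placeholders), could look like this:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.PrintWriter;
        import java.io.StringWriter;
        import org.w3c.tidy.Tidy;

        public class TidyDemo {
            public static void main(String[] args) throws Exception {
                Tidy tidy = new Tidy();
                // Collect Tidy's diagnostics, as the Action does, instead of
                // letting them go straight to stderr:
                StringWriter diagnostics = new StringWriter();
                tidy.setErrout(new PrintWriter(diagnostics));
                try (FileInputStream in = new FileInputStream("input.html");
                     FileOutputStream out = new FileOutputStream("tidied.html")) {
                    tidy.parse(in, out); // writes the cleaned-up document to out
                }
                System.out.println(diagnostics); // "line X column Y - ..." messages
            }
        }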

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, the device may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hit with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know if a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. A metadata block is nothing different to its inner mechanism than a normal data block, because there is no way to tell that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. It's just that for most implementations you have to activate it by command, whereas certain devices do it by default or by design and you don't know about it. I'm not perfectly sure about that, though: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of the data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing: you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. You can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, where rotating-rust disks are used as the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter if one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rotating rust. In conjunction with ZFS, this is more interesting when using a storage array that is capable of doing dedup and where you use LUNs for your pool. But as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies, by dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • Students Can Discover JavaOne for Free

    - by Tori Wieldt
    Students can get a FREE Discover Pass for JavaOne to learn a bit about Java and network with experienced Java professionals. To be eligible, students must be:

      • At least 18 years old
      • Taking a minimum of 6 units
      • Enrolled in a nonprofit institution of learning

    Students will get all the benefits of a Discover attendee, which include the JavaOne and OpenWorld keynotes and the Exhibition Halls; space permitting, students can also attend JavaOne Technical and BOF (Birds-of-a-Feather) sessions, and HOLs (Hands-on Labs). Don't miss out on this opportunity for a real education with a FREE Discover Pass!

    Read the article

  • Public JCP EC Meeting on 10 June

    - by Heather VanCura
    The next JCP EC Meeting is open to the public!  We hope you will join us on Tuesday, 10 June at 08:00 AM PDT.  Agenda includes a discussion on the latest JCP.Next news--JSR 364, Broadening JCP Membership. We hope you will join us, but if you cannot attend, the recording and materials will also be public on the JCP.org multimedia page. Meeting details below.

      Topic: Public EC Meeting
      Date: Tuesday, June 10, 2014
      Time: 8:00 am, Pacific Daylight Time (San Francisco, GMT-07:00)
      Meeting Number: 807 111 580
      Meeting Password: 6893

      To start or join the online meeting, go to https://jcp.webex.com/

      Audio conference information:
      +1 (866) 682-4770 (US)
      Conference code: 5731908
      Security code: 6893
      Global access numbers

    Read the article

  • MySQL, An Ideal Choice for The Cloud

    - by Bertrand Matthelié
    As the world's most popular web database, MySQL has quickly become the leading database for the cloud, with most providers offering MySQL-based services. Access our Resource Kit to discover:

      • Why MySQL has become the leading database in the cloud, and how it addresses the critical attributes of cloud-based deployments
      • How ISVs rely on MySQL to power their SaaS offerings
      • Best practices to deploy the world’s most popular open source database in public and private clouds

    You will also find out how you can leverage MySQL together with Hadoop and other technologies to unlock the value of Big Data, either on-premise or in the cloud. Access white papers, webinars, case studies and other resources in our Resource Kit now!

    Read the article

  • Silicon Valley Code Camp 2012 - Submit Your Talks

    - by arungupta
    Silicon Valley Code Camp follows three rules:

      • Given by/for the community
      • Always free
      • Never occurs during work hours

    I've spoken there in 2011, 2010, 2009, 2008, and 2007, have again submitted a talk this year, and will submit more! It's one of the best organically grown code camps, with attendance constantly growing over the past 6 years. Here is a chart showing the number of conference attendees who registered and attended, and the sessions delivered, over the past 6 years. If you wonder why there is such a big gap between "registered" and "attended", that's because this event is FREE! Yes, 100% free. If you are in or around Silicon Valley, you have no reason not to participate or speak at SVCC. You have the opportunity to meet all the local JUG leaders and the community "rockstars" :-)

    Date: Oct 6/7, 2012
    Venue: Foothill College, 12345 El Monte Road, Los Altos Hills, CA

    Submit today or register!

    Read the article

  • Play Framework Plugin for NetBeans IDE (Part 2)

    - by Geertjan
    After I published part 1 of this series, the first external contribution (i.e., not by me) to the NetBeans plugin for Play Framework 2 was committed today. Yann D'Isanto added support for creating new Play projects. That completely solves a problem I was working on, in a different way altogether. I was working on creating a new wizard that would call "play new" on the command line and pass into the command line the entered name and application type (1 for Java and 2 for Scala). However, Yann's solution is better, at least in the sense that it works, as opposed to mine, which didn't, because of problems I continually had with the command line, since one needs to press Enter multiple times on the Play command line when creating new projects, which I wasn't able to simulate in my new wizard. Yann's approach is simply to follow the approach taken in the Project Type Module Tutorial, which explains how to register a project sample in the IDE.

    I was inspired by Yann's contribution, especially when he mentioned that one needs to build Play projects on the command line. So, I added a new menu item on the right-click of a project for building Play projects, which simply passes "play compile" to the command line for the current project. Via the IDE's main menu bar, you can also Build and Run the application, though the code for the Clean function still needs to be added, which would be a cool thing for anyone out there to add, by using all the existing code and then passing "play clean compile" to the command line.

    Something else that Yann added is an Options Window extension, thanks to the Options Window Module Tutorial, for registering the Play installation, which is a step forward from my hard-coded solution. I changed things slightly so that, when Build or Run is selected without a Play installation being defined, the Options window opens, displaying the tab that Yann created, shown below. Notice that there's no Browse button, which would be a simple next step for anyone else to contribute. A small tip is to use the FileChooserBuilder from the NetBeans IDE APIs when working on the Browse button. See the sketch below for the command-line approach I had been wrestling with.

    Looking forward to more contributions to the Play Framework 2 plugin for NetBeans IDE. Just leave a message here with your ideas, with your java.net name, and then I'll add you to the project on java.net, where I very much look forward to your contributions: http://java.net/projects/nbplay/sources/nbplay
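
    In case it's useful to anyone attempting the wizard route, a minimal sketch with plain java.lang.ProcessBuilder of feeding those Enter presses to the process would be along these lines. This is not the plugin's actual code; the prompt answers, the project name "myapp", and "play" being on the PATH are all assumptions:

        import java.io.OutputStream;
        import java.nio.charset.StandardCharsets;

        public class PlayNewRunner {
            public static void main(String[] args) throws Exception {
                // Hypothetically, "play new myapp" prompts for a name and a
                // template choice (1 = Java, 2 = Scala), each confirmed with Enter.
                ProcessBuilder pb = new ProcessBuilder("play", "new", "myapp");
                pb.redirectErrorStream(true);
                Process process = pb.start();
                // Feed the interactive prompts by writing newline-terminated answers
                // to the process's stdin -- the equivalent of pressing Enter:
                try (OutputStream stdin = process.getOutputStream()) {
                    stdin.write("myapp\n1\n".getBytes(StandardCharsets.UTF_8));
                    stdin.flush();
                }
                int exit = process.waitFor();
                System.out.println("play new exited with " + exit);
            }
        }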

    Read the article

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP’s processes and to meet our members’ expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization) to a follow-on JSR. The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms. Continuing the momentum to move Java and the JCP forward we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more. The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow. The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged. Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348) we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up, they realized that they needed to create a web application per managed server. They did not want this management burden, but would like to have one web application deployed to multiple nodes. The reason a unique application per node is needed is that the web application contains information that must be unique per node: the postfix for the ticket id. My customer would like to externalize this node-specific configuration to either a specific classpath per managed server or to system properties set at startup.

    It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml, in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that the properties defined through it can be externalized.

    To enable this externalization, you would need to change host.properties so it is generic for all the managed servers and thus can be reused by all of them:

        host.name=${cluster.node.id}

    The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>. Voilà, the postfix is externalized and the web application can be shared among the cluster nodes.

    Read the article

  • IT Security - (Not) a Topic for Admins!

    - by Anne Manke
    According to a survey by Synetics of around 1,500 administrators, conducted on the occasion of System Administrator Appreciation Day, 38% of respondents do not concern themselves with IT security and IT security management at all. Only 32.6% spend as little as one to two hours per week on security for their environment. Most working time, meanwhile, goes to server management: 46.2% of respondents said they spend 10 hours or more managing and maintaining servers. The respondents work at companies with between 101 and 1,000 employees. You can find the complete article (in German) on the Heise Online website!

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is even useful in practice: due to the extremely low overhead of zones, I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's put it diplomatically ... a job you give to someone you want to punish. Puppet can help in this case by easing the management and distribution of configuration. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter if they are virtual or physical. A word of warning first: Puppet is a really, really vast topic, and this article is really basic; it goes no more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will immediately get much clearer. (more)

    Read the article

  • RPi and Java Embedded: Hard-Float Support is Here!!!

    - by hinkmond
    You wanted Java Embedded with Hardware Floating Point support to install on a default Raspbian environment for your Raspberry Pi? Well, you just got your wish. Merry Christmas! See: Developer JDK 8 for ARM w/Hard-Float Here's a quote: The Java SE 8 Developer Preview Release for ARM including JavaFX (JDK 8) on Linux has been made available at http://jdk8.java.net. The Developer Preview is provided to the community to get feedback on the ongoing progress of the project. Developers can start developing applications using this build of Java SE 8 on an ARM device, such as a Raspberry Pi. It's a regular JDK (Java SE 8 preview) for your Raspberry Pi, so you should note this means there is a javac (and the other typical JDK tools) available to compile your Java apps right there on the device! Woot! I'll cover step-by-step instructions how to do that in a future blog post. Stay tuned... Hinkmond

    Read the article

  • Avoid overwriting of logs

    - by Koppar
    What usually happens is that the logs fill up and begin getting overwritten, which makes them useless. To avoid this, use these 2 properties in the logging.properties file to suit your requirement:

        java.util.logging.FileHandler.count = x

    This is 1 by default; increase it to a bigger value. It specifies the number of log files kept in the rotation before overwriting starts. For instance, if you set it to 5, details will be logged across five files (java0.log, java1.log ... java4.log), so more information can be captured. Likewise, java.util.logging.FileHandler.limit specifies the maximum size, in bytes, of each log file.
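
    The same settings can also be applied programmatically rather than through logging.properties; here is a minimal sketch (the pattern, size, and count values are illustrative) that keeps 20 files of roughly 5 MB each:

        import java.util.logging.FileHandler;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class RotatingLogDemo {
            public static void main(String[] args) throws Exception {
                // Rotate across 20 files of at most ~5 MB each; the oldest data is
                // overwritten only after all 20 files have filled up.
                FileHandler handler = new FileHandler("%h/java%u.log", 5_242_880, 20, true);
                handler.setFormatter(new SimpleFormatter());
                Logger logger = Logger.getLogger("demo");
                logger.addHandler(handler);
                logger.info("Hello, rotating logs");
            }
        }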

    Read the article
