
  • First JSRs Proposed for Java EE 7

    - by Jacob Lehrbaum
    With the approval of the Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. While functionality pegged for Java EE 7 was previewed at least as early as Devoxx, the filing of these JSRs marks the first officially proposed specifications for the next generation of the popular application server standard. Let's take a quick look at the proposed new functionality.

    Java Persistence API 2.1

    The first of the new proposed specifications is JSR 338: Java Persistence API (JPA) 2.1. JPA is designed for use with both Java EE and Java SE and: "deals with the way relational data is mapped to Java objects ("persistent entities"), the way that these objects are stored in a relational database so that they can be accessed at a later time, and the continued existence of an entity's state even after the application that uses it ends. In addition to simplifying the entity persistence model, the Java Persistence API standardizes object-relational mapping." (more about JPA)

    JAX-RS 2.0

    The second of the new Java specifications to be proposed is JSR 339, otherwise known as JAX-RS 2.0. JAX-RS provides an API that enables the easy creation of web services using the Representational State Transfer (REST) architecture. Key features proposed in the new JSR include a Client API, improved support for URIs, a Model-View-Controller architecture, and much more!

    More information

    Official proposal for Java Persistence 2.1 (jcp.org)
    Official proposal for JAX-RS 2.0 (jcp.org)
    Kicking off Java EE 7 with 2 JSRs: JAX-RS 2.0 / JPA 2.1 (the Aquarium)
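    For a flavor of what the proposed Client API ended up looking like once JSR 339 was finalized, here is a minimal sketch using the javax.ws.rs.client package from the final JAX-RS 2.0 API (the endpoint URL is a placeholder invented for the example):

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;

    // Minimal sketch of the JAX-RS 2.0 Client API (JSR 339);
    // the endpoint URL is a hypothetical placeholder.
    public class RestClientSketch {
        public static void main(String[] args) {
            Client client = ClientBuilder.newClient();
            try {
                String payload = client
                        .target("http://example.org/api/items") // hypothetical endpoint
                        .request("application/json")
                        .get(String.class);
                System.out.println(payload);
            } finally {
                client.close(); // release connection resources
            }
        }
    }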

    Read the article

  • Patrick Curran Session-Keynote at DOAG 2012

    - by Heather VanCura
    Patrick Curran, Chair of the Java Community Process (JCP) and Director of the JCP Program Management Office, will be speaking this week at the DOAG 2012 event in Nuremberg, Germany.

    Keynote: "Java: Restructuring the Java Community Process"
    November 22nd | 09:00-09:45 am

    The Java Community Process (JCP) plays a critical role in the evolution of Java. This keynote will explain how the JCP is organized and how interested members of the Java community - commercial organizations, non-profits, Java user groups, and individual developers - work together to advance the Java language and platforms. It will then discuss recent and upcoming changes to the JCP's structure and operating processes, and will explain how these changes ('JCP.next') will make the organization more efficient and will ensure that its work is carried out in a more open and more transparent manner.

    Read the article

  • Geek Bike Ride Sao Paulo

    - by Tori Wieldt
    What do you do on a sunny Saturday in Sao Paulo when you have several Java enthusiasts, street lanes closed off for bicyclists, new cool Duke jerseys, and some wonderful bike angels to provide a tour through the city? A GEEK BIKE RIDE, of course! The weekend before JavaOne Latin America, the Sao Paulo geek bike ride was held today. We had 20+ riders and a wonderful route that took us from the Bicycle Park to and through downtown. It was a 30km ride, but our hosts were kind enough to give riders the option of taking the subway for part of the trip. Thanks to our wonderful bike angels, the usual rental bike problems like rubbing brakes, dropped chains, and even a flat tire were handled with ease. The geek bike ride wasn't just for out-of-towners. Loiane Groner, who lives in Sao Paulo, said, "I love the Geek Bike Ride! The last time I was in these parts of the city, I think I was five years old!" A good time was had by all. (My only crash of the day was riding up an escalator with my bike. Luckily, the bikers with me were so busy helping me that no pictures were taken. <phew>) Enjoy this video by Hugo Lavalle. You can also view Hugo's pictures. More pictures to come on Stephen Chin's blog. So, what city is up next?

    Read the article

  • Adding SSE support in Java EE 8

    - by delabassee
    SSE (Server-Sent Events) is a standard mechanism used to push server notifications to clients over HTTP. SSE is often compared to WebSocket, as they are both supported in HTML5 and they both provide the server a way to push information to its clients, but they are different too! See here for some of the pros and cons of using one or the other. For REST applications, SSE can be quite complementary, as it offers an effective solution for a one-way publish-subscribe model, i.e. a REST client can 'subscribe' and get SSE-based notifications from a REST endpoint. As a matter of fact, Jersey (the JAX-RS Reference Implementation) has already supported SSE for quite some time (see the Jersey documentation for more details). There might also be some cases where one might want to use SSE directly from the Servlet API. Sending SSE notifications using the Servlet API is relatively straightforward. To give you an idea, check here for 2 SSE examples based on the Servlet 3.1 API. We are thinking about adding SSE support in Java EE 8, but the question is where, as there are several options in the platform where SSE could potentially be supported: the Servlet API, the WebSocket API, JAX-RS, or even a dedicated SSE API, and thus a dedicated JSR too! Santiago Pericas-Geertsen (JAX-RS Co-Spec Lead) conducted an initial investigation around that question. You can find the arguments for the different options and Santiago's findings here. So at this stage JAX-RS seems to be a good choice to support SSE in Java EE. This will obviously be discussed in the respective JCP Expert Groups, but what is your opinion on this question?
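    To give a flavor of the Servlet route mentioned above, here is a minimal sketch of an SSE endpoint using the Servlet 3.1 async API (the servlet class and URL pattern are invented for the example; a real endpoint would register the AsyncContext and push events to it later from some application event source):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical SSE endpoint: pushes one event and leaves the
    // connection open for later pushes via the stored AsyncContext.
    @WebServlet(urlPatterns = "/notifications", asyncSupported = true)
    public class SseServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/event-stream");
            resp.setCharacterEncoding("UTF-8");
            AsyncContext async = req.startAsync();
            async.setTimeout(0); // do not time the connection out
            PrintWriter out = async.getResponse().getWriter();
            out.write("data: hello from the server\n\n"); // SSE wire format
            out.flush();
        }
    }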

    Read the article

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Ten

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work Group completed the first-level review of concepts for existing content for OAGIS Ten. This is one of the first milestones for OAGIS Ten. In doing this, the concepts of key objects (the Nouns) have been identified, along with the key context for their use. While OAGIS Ten remains a work in progress, the work group shows progress. Going forward, the other councils will provide additional input to these and their own concepts and the contexts for each. Additionally, sub-groups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

    Read the article

  • Digital Agenda in the EU means open standards after all

    - by trond-arne.undheim
    European Commission Vice President Neelie Kroes' speech on "Openness at the heart of the EU Digital Agenda" at the Open Forum Europe 2010 Summit in Brussels refocuses the EU Digital Agenda on open standards. I say the speech scores a 90/100: smooth, smart, a little vicious at the fringes, maybe? Anyway, it shows the strategy might age and implement well. This is Dutch pragmatism at its best. The EU Digital Agenda (I give it an 85/100 score), while laudable, stops short of using the term. The next step for the European Commission is defining the term "open standards". If they do that, and do it right, Vice President Kroes will go into history as having made a significant contribution towards global progress in e-government by possibly eradicating lock-in forever. Moreover, she will put Europe's SMEs in a better position to succeed in a global IT market filled with barriers to entry from players not fully understanding, using, or unpacking standards. Kroes' interesting suggestion that she will now explore a "legal proposal" on interoperability that will have an impact on all IT companies operating in the European market is more up for debate. An interoperability directive? One run by DG COMP, or one run by DG INFSO, telecom style? Would something like that work? Would the industry like it? Would it help European governments? Possibly, if done right. The good thing was, Kroes pointed out that she will look for input from the industry. Kroes' track record is one of not being scared of taking on the Titans. She also wants to enact real, positive, lasting change. "I will not go anywhere," she said. All of that is good. And she does understand the importance of open standards. Let's now start discussing the details. Implementing the Digital Agenda is not simple. It requires collaboration across the various Directorates in the European Commission. Mounting a new interoperability directive has also never been attempted before. Getting it right is important. Even possibly finding out that it cannot be done right, and choosing a more lightweight approach that is equally effective, would be bold. Go Kroes!

    Read the article

  • New videos: Getting started with embedded Java and more

    - by terrencebarr
    OTN just published a set of six videos related to embedded Java:

    - Java at ARM TechCon
    - Java SE Embedded Development Made Easy, Part 1
    - Java SE Embedded Development Made Easy, Part 2
    - Mobile Database Synchronization - Healthcare Demonstration
    - Tomcat Micro Cluster
    - Java Embedded Partnerships

    Good stuff. Enjoy! Cheers, - Terrence

    Filed under: Mobile & Embedded. Tagged: embedded, Java Embedded, Java SE Embedded, video

    Read the article

  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done, and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected (click the image to enlarge it to get a better view), which is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and from where the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch this YouTube movie showing how Chrome Developer Tools integration can fit directly into the workflow below.

    Anyone want to help get this plugin further? What's needed:

    - Much deeper Dart editing support. Right now only very basic syntax coloring is provided, i.e., an ANTLR lexer is integrated into the NetBeans syntax coloring infrastructure. Parsing, error checking, code completion, and some small code templates are needed.
    - A new panel in the Project Properties dialog on NetBeans HTML5 projects for enabling Dart (similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is immediately generated and the application is immediately deployable.
    - A way to run Dart in the background whenever changes are made to a Dart file, creating the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts, as is currently the case.
    - Some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from them, i.e., from Dart projects created outside the IDE.

    I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • Preventing Users From Accessing wp-admin

    - by Gary Pendergast
    If you have a WordPress site that you allow people to sign up for, you often don't want them to be able to access wp-admin. It's not that there are any security issues; you just want to ensure that your users are accessing your site in a predictable manner. To block non-admin users from getting into wp-admin, you just need to add the following code to your functions.php, or somewhere similar:

    add_action( 'init', 'blockusers_init' );

    function blockusers_init() {
        // Send anyone who is not an administrator back to the homepage.
        if ( is_admin() && ! current_user_can( 'administrator' ) ) {
            wp_redirect( home_url() );
            exit;
        }
    }

    Ta-da! Now only administrator users can access wp-admin; everyone else will be redirected to the homepage.

    Read the article

  • Devoxx UK JCP & Adopt-a-JSR activities

    - by Heather VanCura
    Devoxx UK starts this week! The JCP Program is organizing many activities throughout the conference, including some tables in the Hackergarten area on 12-13 June. Topics include Java EE, Data Grids, Java SE 8 (Lambdas and the Date & Time API), the Money & Currency API, and OpenJDK. We will have two book signings, by Richard Warburton and Peter Pilgrim, during the Hackergarten - get a free signed copy of their books at these times - first come, first served (limited quantities available). Thursday night is the party and the Birds of a Feather (BoF) sessions - come with your favorite questions and topics related to the JCP, Adopt-a-JSR, and Adopt OpenJDK Programs! See below for the schedule of activities; I will fill in details for each session tomorrow.

    Thursday 12 June
    - 10:20-12:50 Java EE -- Arun Gupta
    - 13:30-17:00 Lambdas/Date & Time API -- Richard Warburton & Raoul-Gabriel Urma (also a book signing with Richard Warburton during the afternoon break)
    - 14:30-17:30 Data Grids -- Peter Lawrey
    - 14:30-18:00 Money & Currency -- Anatole Tresch
    - 18:45 Adopt OpenJDK BoF session (Java EE BoF runs concurrently)
    - 19:45 JCP & Adopt-a-JSR BoF session

    Friday 13 June
    - 10:20-13:00 OpenJDK -- Mani Sarkar
    - 10:20-14:30 Money & Currency -- Anatole Tresch
    - 10:20-13:00 Java EE -- Peter Pilgrim
    - 13:00-13:30 Peter Pilgrim Java EE 7 book signing, sponsored by the JCP, at lunch time
    - 13:30-15:30 JCP.Next/JSR 364 -- Heather VanCura

    Read the article

  • Best Of 2010

    - by Mike Dietrich
    Hi there! In Australia, Japan, Singapore and many other countries it's already 2011 - but in Germany and the US there is still some time until midnight :-) To round up the year, you'll find a few off-topic pictures from 2010 below. You might click on the pictures to get a better resolution. Enjoy ...

    Moscow - Red Square
    Tokyo Train - Cell Phone Mania
    Great Chinese Wall near Beijing
    Hong Kong by Night
    Yering Station Winery, Yarra - Victoria, Australia
    Dublin, Ireland - during the ash cloud
    - no comment -
    Liberty
    It's sometimes foggy in SF
    Singapore Opera
    Stockholm - Gamla Stan
    Unbelievable white beach at Camps Bay, Clifton, Capetown
    Words fail me ...

    Mike

    Read the article

  • Getting Started with MySQL Cluster, Hands-on Lab, Next Saturday, MySQL Connect

    - by user13819847
    Hi! I'm speaking at MySQL Connect next Saturday, Sep. 29. My session is a hands-on lab (HOL) on MySQL Cluster. If you are interested in familiarizing yourself a bit with MySQL Cluster, this is definitely a session for you. I will start by briefly introducing MySQL Cluster and its architecture. Then I will guide you through the steps needed to install a local MySQL Cluster, connect to it (using the command line), monitor its logs, and shut it down safely. We will then have a chance to see the most common commands used in MySQL Cluster administration (e.g. cluster backup) as well as the most common operations (e.g. online data node add). Cluster's users and customers have the flexibility to choose whether they prefer to use a SQL or NoSQL approach to connect to MySQL Cluster, so, during the last part of the HOL, we will see how to connect to MySQL Cluster using the NoSQL NDB API. If there is enough time at the end, we will also compile and execute some simple Java programs that make use of Connector/J to connect to the SQL nodes of our Cluster. I hope this HOL will be of interest to you! Below are some details if you decide to attend:

    When: Saturday Sep. 29, 4 pm
    Where: Hilton San Francisco - Plaza Room A

    If you are interested in other MySQL Cluster sessions, you will find the info you need in this post. The full program of the MySQL Connect conference is here, and if you are not registered yet, remember that you can still save US $300 over the on-site fee - Register Now! See you at MySQL Connect!
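    As a taste of the Connector/J part of the lab, here is a minimal sketch of a Java program connecting to one of the Cluster's SQL nodes over plain JDBC (the host, port, credentials, and table are placeholders invented for the example; only the ENGINE=NDBCLUSTER clause is Cluster-specific):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ClusterConnectorJSketch {
        public static void main(String[] args) throws Exception {
            // A SQL node of the cluster; host/port/credentials are placeholders.
            String url = "jdbc:mysql://localhost:3306/test";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement()) {
                // NDBCLUSTER is the storage engine that stores the table in the data nodes.
                st.execute("CREATE TABLE IF NOT EXISTS t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER");
                st.execute("REPLACE INTO t1 VALUES (1)");
                try (ResultSet rs = st.executeQuery("SELECT id FROM t1")) {
                    while (rs.next()) {
                        System.out.println("id = " + rs.getInt("id"));
                    }
                }
            }
        }
    }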

    Read the article

  • Tip #13 java.io.File Surprises

    - by ByronNevins
    There is an assumption that I've seen in code many times that is totally wrong. And this assumption can easily bite you. The assumption is: File.getAbsolutePath and getAbsoluteFile return paths that are not relative. Not true! Sort of. At least not in the way many people would assume. All they do is make sure that the beginning of the path is absolute. The rest of the path can be loaded with relative path elements. What do you think the following code will print?

    import java.io.File;
    import java.io.IOException;

    public class Main {
        public static void main(String[] args) {
            try {
                File f = new File("/temp/../temp/../temp/../");
                File abs = f.getAbsoluteFile();
                File parent = abs.getParentFile();
                System.out.println("Exists: " + f.exists());
                System.out.println("Absolute Path: " + abs);
                System.out.println("FileName: " + abs.getName());
                System.out.printf("The Parent Directory of %s is %s\n", abs, parent);
                System.out.printf("The CANONICAL Parent Directory of CANONICAL %s is %s\n",
                        abs, abs.getCanonicalFile().getParent());
                System.out.printf("The CANONICAL Parent Directory of ABSOLUTE %s is %s\n",
                        abs, parent.getCanonicalFile());
                System.out.println("Canonical Path: " + f.getCanonicalPath());
            }
            catch (IOException ex) {
                System.out.println("Got an exception: " + ex);
            }
        }
    }

    Output:

    Exists: true
    Absolute Path: D:\temp\..\temp\..\temp\..
    FileName: ..
    The Parent Directory of D:\temp\..\temp\..\temp\.. is D:\temp\..\temp\..\temp
    The CANONICAL Parent Directory of CANONICAL D:\temp\..\temp\..\temp\.. is null
    The CANONICAL Parent Directory of ABSOLUTE D:\temp\..\temp\..\temp\.. is D:\temp
    Canonical Path: D:\

    Notice how it says that the parent of D:\ is D:\temp!! The file, f, is really the root directory. The parent is supposed to be null. I learned about this the hard way! getParentXXX simply hacks off the final item in the path. You can get totally unexpected results like the above. Easily. I filed a bug on this behavior a few years ago [1].

    Recommendations:

    (1) Use getCanonical instead of getAbsolute. There is a 1:1 mapping of files and canonical filenames, i.e. each file has one and only one canonical filename, and it will definitely not have relative path elements in it. There are an infinite number of absolute paths for each file.

    (2) To get the parent file for File f, do the following instead of getParentFile:

    File parent = new File(f, "..");

    [1] http://bt2ws.central.sun.com/CrPrint?id=6687287

    Read the article

  • SOA Suite 11g Releases

    - by antony.reynolds
    A few years ago Mars renamed one of the most popular chocolate bars in England from Marathon to Snickers. Even today there are still some people confused by the name change who refer to them as Marathons. Well, last week we released SOA Suite 11.1.1.3 and BPM Suite 11.1.1.3, as well as OSB 11.1.1.3. It seems that some people are a little confused by the naming and how to install these new versions, probably the same Brits who call a Snickers a Marathon :-). Calling all the revisions 11g Release 1 has caused confusion. To help these people I have created a little diagram to show how you can get the latest version onto your machine. The dotted lines indicate dependencies.

    Note that SOA Suite 11.1.1.3 and BPM 11.1.1.3 are provided as a patch that is applied to SOA Suite 11.1.1.2. For a new install there is no need to run the 11.1.1.2 RCU; you can run the 11.1.1.3 RCU directly.

    All SOA & BPM Suite 11g installations are built on a WebLogic Server base. The WebLogic 11g Release 1 version is 10.3 with an additional number indicating the revision. Similarly, the 11g Release 1 SOA Suite, Service Bus and BPM Suite have a version 11.1.1 with an additional number indicating the revision. The final revision number should match the final revision in the WebLogic Server version. The products are also sometimes identified by a Patch Set number, indicating whether this is the 11gR1 product with the first or second patch set. The table below shows the different revisions with their aliases.

    Product          Version   Base WebLogic   Alias
    SOA Suite 11gR1  11.1.1.1  10.3.1          Release 1 or R1
    SOA Suite 11gR1  11.1.1.2  10.3.2          Patch Set 1 or PS1
    SOA Suite 11gR1  11.1.1.3  10.3.3          Patch Set 2 or PS2
    BPM Suite 11gR1  11.1.1.3  10.3.3          Release 1 or R1
    OSB 11gR1        11.1.1.3  10.3.3          Release 1 or R1

    Hope this helps some people. If you find it useful you could always send me a Marathon bar - sorry, Snickers!

    Read the article

  • IBM Keynote: (hardware,software)->{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder.

    Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012. "One idea that is very interesting is the idea of multi-tenancy," said McGee, "and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself." McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it's a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment.

    McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. "The point," explained McGee, "is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications."

    John Duimovich on Improvements in Java

    McGee then brought onstage IBM's Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained: "When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called 'shared classes' where we put shared code and read-only artefacts in a cache that is sharable across hypervisors." By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing tenants, file sockets, and memory use through throttling and control. Duimovich touched on the "thin is in" model and IBM's Liberty Profile, a lightweight runtime that allows for greater efficiency in interacting with the cloud.

    Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8 and then 4 and then 16 cores. "Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich. He showed how to resize LPARs, reallocate CPUs, and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software

    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage: developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent arrivals such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee.

    McGee discussed transactional messaging applications, where developers persist messages transactionally to storage, something traditionally done by backing messages on spinning disks, an approach that is now mostly outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java - we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee.

    According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems - systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them - JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Thank You MySQL Community! MySQL 5.6.9 Release Candidate Available Now!

    - by Rob Young
    The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download. You can get MySQL 5.6.9 here (look under the "Development Releases" tab). This version is the result of feedback we have gotten since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL, so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready product release in early 2013. MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services and includes across-the-board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema instrumentation, security, and developer-enabling NoSQL functionality. You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article. On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, and Embedded Applications and Services with MySQL 5.6. Hurry! Seating is limited! As always, thanks for your continued support of MySQL!

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives.

    As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting-edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member.
    - Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2-byte alignment is a poor choice for ELF object archive members: 32-bit objects require 4-byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2-byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8-byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course - nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    1. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    2. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    3. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward-compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64 bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

    - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though.
    - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64 bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
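    To make the layout described above concrete, here is a minimal sketch in Java that walks the members of an SVR4 archive and prints each member's name and size. It assumes the ar_hdr field layout from /usr/include/ar.h (name at offset 0 for 16 bytes, size at offset 48 for 10 bytes, 60 bytes per header); error handling is minimal and special members such as "/" and "//" are printed as-is:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    // Sketch: walk an SVR4 archive, printing member names and sizes.
    public class ArWalker {
        public static void main(String[] args) throws IOException {
            try (RandomAccessFile ar = new RandomAccessFile(args[0], "r")) {
                byte[] magic = new byte[8];
                ar.readFully(magic);
                if (!new String(magic, StandardCharsets.US_ASCII).equals("!<arch>\n"))
                    throw new IOException("not an archive");
                byte[] hdr = new byte[60]; // fixed-size member header (ar_hdr)
                while (ar.getFilePointer() < ar.length()) {
                    ar.readFully(hdr);
                    String name = new String(hdr, 0, 16, StandardCharsets.US_ASCII).trim();
                    long size = Long.parseLong(
                            new String(hdr, 48, 10, StandardCharsets.US_ASCII).trim());
                    System.out.printf("%-20s %d bytes%n", name, size);
                    long next = ar.getFilePointer() + size;
                    ar.seek(next + (next & 1)); // members are padded to even length
                }
            }
        }
    }

    Note how the even-length padding rule described earlier shows up as the single (next & 1) adjustment when seeking to the next member header.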

    Read the article

  • A Productive Hobby of Mine

    - by user13366195
    Oops, I did it again... Even though I have been doing project work in the hardware business for a long time, I am a passionate software developer at heart. I wrote my first programs into a math notebook in 1981, before I even had access to a computer. Later I even sold some programs as shareware: who still remembers ARV, the revolutionary file management program that automatically organized files onto floppy disks sorted by topic, or T-Kal, the simple and user-friendly appointment calendar? All of them were commercially rather unsuccessful, which is hardly surprising. In the end, they were programs I had written for myself, and I offered them for sale only out of interest in the business and tax processes involved in selling software. Now I have done it again. If you like, you can see the result at http://www.dw-aufgaben.de.

    Read the article

  • YouTube: Tips by Bitwise Courses on NetBeans

    - by Geertjan
    I really like the potential of YouTube as a platform for short info clips that don't take much time to produce and about as much time to consume. Huw Collingbourne's Bitwise Courses channel is full of exactly this kind of YouTube clip. Several of his YouTube clips are about or make use of NetBeans. The related Twitter account is @bitwisecourses and the homepage is bitwisecourses.com. Here's a great example, the latest YouTube clip created by Bitwise Courses: a very clear and simple explanation, on a specific and narrow topic, and very short and sweet. And very useful! I didn't know about this feature myself. Direct link to the movie: https://www.youtube.com/watch?v=b0fKT_hFQpU Here's to more of these, they're wonderful. More such YouTube clips are needed, short and precise, on very specific topics. And I'm very happy to promote them, as you can see.

    Read the article

  • Ops Center zip documentation

    - by Owen Allen
    If you're operating in a dark site, or are otherwise without easy access to the internet, it can be tricky to get access to the docs. The readme comes along with the product, but that's not exactly the same as the whole doc library. Well, we've put a zip file with the whole doc library contents up on the main doc page. So, if you are in a site without internet access, you can get the zip, extract it, and have a portable version of the site, including the pdf and html versions of all of the docs.

    Read the article
