Search Results


  • Great Java EE Concurrency Write-up!

    - by reza_rahman
    As you may be aware, JSR-236, Concurrency Utilities for the Java EE platform, is now a candidate for addition to Java EE 7. While it is a critical enabling API, it is not necessarily obvious why it is so important. This is especially true with existing features like EJB 3 @Asynchronous, Servlet 3 async and JAX-RS 2 async. On his blog, DZone MVB Sander Mak does an excellent job of explaining the motivation and importance of JSR-236. Perhaps even more importantly, he discusses potential issues with the API, such as alignment with CDI and Java SE Fork/Join. Read the excellent write-up here!
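
    For a flavor of what JSR-236 enables, here is a minimal sketch of submitting work to a container-managed executor from a servlet. The servlet and the task are illustrative, and details may still shift while the JSR remains a candidate:

        import java.io.IOException;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.Future;
        import javax.annotation.Resource;
        import javax.enterprise.concurrent.ManagedExecutorService;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet("/report")
        public class ReportServlet extends HttpServlet {

            // Injected by the container; its threads carry the application's
            // context (class loader, JNDI namespace, security identity).
            @Resource
            private ManagedExecutorService executor;

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Offload work to container-managed threads instead of
                // spawning our own, which Java EE discourages.
                Future<String> report = executor.submit(new Callable<String>() {
                    @Override
                    public String call() {
                        return expensiveReport();
                    }
                });
                try {
                    resp.getWriter().println(report.get());
                } catch (InterruptedException | ExecutionException e) {
                    throw new ServletException(e);
                }
            }

            private String expensiveReport() {
                return "42"; // stand-in for real work
            }
        }

    The same specification also defines scheduled and thread-factory variants (ManagedScheduledExecutorService, ManagedThreadFactory) along the same lines.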

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial Trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar - very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally, you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 does not support the new API out of the box. So what are you to do? Fortunately, you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well written blog entry. Besides explaining the problem and the solution, the entry is also very good for getting a better understanding of JPA 2.1 Type Converters. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility, of course, is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
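
    To make the converter idea concrete, here is a minimal sketch of a JPA 2.1 AttributeConverter mapping LocalDate to java.sql.Date; the class name is illustrative, and Steven's entry covers the approach in full:

        import java.time.LocalDate;
        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // autoApply = true applies the conversion to every LocalDate
        // attribute in the persistence unit, no per-field @Convert needed.
        @Converter(autoApply = true)
        public class LocalDateConverter
                implements AttributeConverter<LocalDate, java.sql.Date> {

            @Override
            public java.sql.Date convertToDatabaseColumn(LocalDate attribute) {
                return attribute == null ? null : java.sql.Date.valueOf(attribute);
            }

            @Override
            public LocalDate convertToEntityAttribute(java.sql.Date dbData) {
                return dbData == null ? null : dbData.toLocalDate();
            }
        }

    With a converter like this on the classpath, an entity can declare a LocalDate field directly and the provider will persist it as a DATE column.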

    Read the article

  • Why are embedded device apps still written in C/C++? Why not Java programming language?

    - by hinkmond
    At the recent Black Hat 2014 conference in Sin City, the Black Hatters were focusing on Embedded Devices and IoT. You know? Make your networked-toaster burn your bread 10,000 miles away, over the Web, for grins and giggles. Well, apparently the Black Hatters say it can be done pretty easily these days, which is scary. See: Securing Embedded Devices & IoT. Here's a quote: "All these devices are still written in C and C++. The challenges associated with developing securely in these languages have been fought for nearly two decades. 'You often hear people say, "Well, why don't we just get rid of the C and C++ language if it's so problematic. Why don't we just write everything in C# or Java, or something that is a little safer to develop in?"' DeMott says." Gah! Why are all these IoT devices still using C/C++? Of course they should be using Java SE Embedded technology! It's a natural fit for better security on embedded devices. Or, I guess, developers really don't mind if their networked-toasters do char their breakfast. If it can be burned, it will be... That's what I say. Unless they use Java. Hinkmond

    Read the article

  • Transparency call for Spec Leads and EC materials posted

    - by heathervc
    The materials and recording from the February 2012 call for JCP program Spec Leads are now available. This call features Martijn Verburg, alternate EC representative for the London Java Community, and includes information on the Adopt-a-JSR program. The materials and audio recording of the "Leveraging the Community" call can be found on the multimedia page of jcp.org. The EC meeting summaries from February and March 2012 have also been posted. Following the April 2012 EC Meeting this morning (minutes and materials will be posted soon), there are now four EC Members that have lost their voting privileges--AT&T, SK Telecom, Samsung and Twitter. In order to regain their privileges, these EC Members must attend two EC meetings in a row, as detailed in the EC Standing Rules.

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider Robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2) One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an Exacto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering and you have given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact. There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's (from adafruit.com) links on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html If everything looks good, zip everything back up and meet back here for how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

    Read the article

  • Management Software in Java for Networked Bus Systems

    - by Geertjan
    Telemotive AG develops complex networked bus systems such as Ethernet, MOST, CAN, FlexRay, LIN and Bluetooth, as well as in-house products in infotainment, entertainment, and telematics related to driver assistance, connectivity, diagnosis, and e-mobility. Devices such as those developed by Telemotive typically come with management software, so that the device can be configured. (Just like an internet router comes with management software too.) The blue AdmiraL is a development and analysis device for the APIX (Automotive Pixel Link) technology. Here is its management tool: The blue PiraT is an optimised multi-data logger, developed by Telemotive specifically for the automotive industry. With the blue PiraT, the communication of bus systems and control units is monitored and relevant data can be recorded very precisely. And here is how the tool is managed: Both applications are created in Java and, as clearly indicated in many ways in the screenshots above, are based on the NetBeans Platform. More details can be found on the Telemotive site.

    Read the article

  • Framework Folders and Duplicate File Names

    - by Kevin Smith
    I have been working with Framework Folders a little bit in the past few days and found one unexpected behavior that differs from Contribution Folders (Folders_g). If you check a file into a Framework Folder and a file with that name already exists in the folder, the check-in is allowed and the file is renamed for you. In Folders_g this would have generated an error and prevented you from checking in the file. A quick check of the Framework Folders configuration settings in the Application Administrator’s Guide for Content Server does not show a configuration parameter to control this. I'm still thinking about this and not sure if I like this new behavior or not. I guess from a user perspective this more closely aligns Framework Folders with how Windows handles duplicate file names, but if you are migrating from Folders_g and expect a duplicate file name to be rejected, this might cause you some problems.

    Read the article

  • Parleys Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Parleys.com is an e-learning platform that provides a unique experience of viewing presentations online and offline, with integrated movies and chaptering, from top-notch developer conferences and about 40 JUGs all around the world. Stephan Janssen (the Devoxx man and Parleys webmaster) presented at the GlassFish Community Event at JavaOne 2012 and shared why they moved from Tomcat to GlassFish. The move paid off, as GlassFish was able to handle 2000 concurrent users very easily. Now they are also running the Devoxx CFP and registration on this updated infrastructure. GlassFish clustering, the asadmin CLI, application versioning, and the JMS implementation are some of the features that made them a happy user. Recently they migrated their application from Spring to Java EE 6. This allows them to avoid getting locked into proprietary frameworks and also to avoid 40MB WAR file deployments. A stateless application, JAX-RS, MongoDB, and Elastic Search are their magical formula for success there. Watch the video below showing him in full action: More details about their infrastructure are available here.

    Read the article

  • Dynamic Layout in BI Publisher

    - by Manoj Madhusoodanan
    This blog describes how to set the layout of a BI Publisher report dynamically. Let's take a simple business scenario: the user wants to view the output as either PDF or Excel. There are two ways we can achieve this:

    1) If the output type is not a program parameter, we can choose the layout type at submission time in the SRS window.
    2) If the report output type is a parameter, we need to choose the layout at submission time using FND_REQUEST.ADD_LAYOUT.

    Here I am discussing the second approach. I have created the following components for this scenario:

    1) XXCUST - Sample Report: This program actually generates the XML data. It is linked with an RTF template. Note: At the time of creating this program you can uncheck "Use in SRS" to prevent users from submitting it from the SRS window, since we are submitting this program from another concurrent program. It is also not required to add it to a request group.
    2) XXCUST - Report Wrapper: This program calls XXCUST - Sample Report and is exposed to users through SRS.
    3) SAMPLE_REPORT_WRAPPER: Executable which calls xxcust_sample_rep.main.
    4) SAMPLE_REPORT_EXE: Executable which calls xxcust_sample_rep.report_main.
    5) XXCUST - Sample Report Data Definition: Data definition linked to XXCUST - Sample Report.
    6) XXCUST - Sample Report Template: Template linked to XXCUST - Sample Report Data Definition.
    7) Package.
    8) RTF.

    Read the article

  • Do we have enough time to build an electric car future?

    - by julien.groues
    A recent article from Greenbang has posed the question 'Do we have enough time to build an electric car future?'. The writer observes that, although the future of transport might lie with electric cars, there is concern regarding whether we'll be able to build the market and infrastructure required to support them before carbon and oil constraints create difficulties in powering the vehicles. Of course, the increasing use of electric vehicles (EVs) is going to put excessive pressure on energy grids, as large volumes of electricity will need to be directed to charging points, which in turn must handle fluctuating demand at peak times. EVs are increasing in popularity as a sustainable method of transport to reduce carbon consumption, and electric utilities will have the opportunity, and the challenge, to quickly determine the best methods to fuel these vehicles and accommodate the associated increases in demand for energy. Critically, efficient software is required to provide diagnostic and predictive capabilities related to EV refuelling - for example, anticipated electricity flow will need to be addressed as the number of EVs on the road increases, and electricity will need to be directed to specific areas on demand as vehicles attempt to recharge en masse. But a smart grid infrastructure can meet these demands, intelligently. The implementation of a smart grid is not in the distant future; it is an achievable reality for utilities via simple installation of new software and technologies, which can be done incrementally for those facing existing legacy systems or concerned with upfront costs. The smart grid is integral to the monitoring and control of energy use as well as the future-proofing of the energy grid. A smart grid will be critical to meeting the electricity requirements of new EVs and will ensure their successful deployment by providing a reliable foundation for the data handling required to record and manage electricity distribution - from recording and assessing energy usage, to analysing data and sharing information with consumers via green billing. http://www.greenbang.com/do-we-have-enough-time-to-build-an-electric-car-future_14248.html

    Read the article

  • JavaOne Content Catalog Live!

    - by programmarketingOTN
    The JavaOne Content Catalog—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more for San Francisco 2012—is live! In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they’re presenting this year. And, directly from the catalog, you can share sessions you’re interested in with friends and colleagues through a broad array of social media channels. Start checking out JavaOne content now to plan your week at the conference. Then you’ll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. Happy browsing!

    Read the article

  • SQL Saturday 43 (Redmond, WA) Review

    - by BuckWoody
    Last Saturday (June 12th) we held a “SQL Saturday” (more about those here) event in Redmond, Washington. The event was held at the Microsoft campus, at the Mixer in our new location called the “Commons”. This is a mall-like area that we have on campus, and the Mixer is a large building with lots of meeting rooms, so it made a perfect location for the event. There was a sign to find the parking, and once there they had a sign to show how to get to the building. Since it’s a secure facility, Greg Larsen and crew had a person manning the door so that even late arrivals could get in. We had about 400 sign up for the event, and a little over 300 attend (official numbers later). I think we would have had a lot more, but the sun was out – and you just can’t underestimate the effect of that here in the Pacific Northwest. We joke a lot about not seeing the sun much, but when a day like what we had on Saturday comes around, and on a weekend at that, you’d cancel your wedding to go outside and play in the sun. And your spouse would agree with you for doing it. We had some top-notch speakers, including Clifford Dibble and Kalen Delaney. The food was great, we had multiple sponsors (including Confio, who seems to be at all of these) and the attendees were from all over the professional spectrum, from developers to BI to DBAs. Everyone I saw was very engaged, and when I visited room-to-room I saw almost no one in the halls – everyone was in the sessions. I also saw a much larger Microsoft presence this year, especially from Dan Jones’ team. I had a great turnout at my session, and yes, I was wearing an Oracle staff shirt. I did that because I wanted to show that the session I gave on “SQL Server for the Oracle DBA” was non-marketing – I couldn’t exactly bash Oracle wearing their colors! These events are amazing. I can’t emphasize enough how much I appreciate the volunteers and how much work they put into these events, and you for coming. If you’re reading this and you haven’t attended one yet, definitely find out if there is one in your area – and if not, start one. It’s a lot of work, but it’s totally worth it.

    Read the article

  • JCP 2.9 & Transparency Spec Lead Call material is available

    - by Heather VanCura
    The JCP 2.9 & Transparency Spec Lead Call materials and recording from 9 November are now available on the JCP.org multimedia page. Learn about the changes introduced with JCP 2.9, effective Tuesday, 13 November, and get a review of the JCP.Next reform efforts, plus a progress report on JCP 2.8, specifically around the areas of transparency, participation and agility, as well as suggestions for how you can get more involved in supporting these efforts with the current JCP program JSRs.

    Read the article

  • RPi and Java Embedded GPIO: Connecting LEDs

    - by hinkmond
    Next, we need some low-level peripherals to connect to the Raspberry Pi GPIO header. So, we'll do what's called a "Fry's Run" in Silicon Valley, which means we go shop at the local Fry's Electronics store for parts. In this case, we'll need some breadboard jumper wires (blue wires in the photo), some LEDs, and some resistors (for the RPi GPIO, 150 ohms - 300 ohms would work for the 3.3V output of the GPIO ports). And, if you want to do other projects, you might as well buy a breadboard, which is a development board with lots of holes in it. Ask a Fry's clerk for help. Or, better yet, ask the customer standing next to you in the electronics components aisle for help. (Might be faster.) So, go to your local hobby electronics store, or go to Fry's if you have one close by, and come back here for the next blog post to see how to hook these parts up. Hinkmond
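
    Ahead of the wiring post, here is a minimal sketch of the software side: blinking an LED from Java SE Embedded through the Linux sysfs GPIO interface. The pin number is illustrative and assumes the LED (with its resistor) ends up wired to that GPIO:

        import java.io.FileWriter;
        import java.io.IOException;

        public class BlinkLed {

            private static final String GPIO = "/sys/class/gpio";
            private static final String PIN = "17"; // illustrative BCM pin number

            // Write a value to a sysfs file; sysfs entries take plain strings.
            private static void write(String file, String value) throws IOException {
                try (FileWriter fw = new FileWriter(file)) {
                    fw.write(value);
                }
            }

            public static void main(String[] args) throws Exception {
                write(GPIO + "/export", PIN);                    // expose the pin
                write(GPIO + "/gpio" + PIN + "/direction", "out");
                for (int i = 0; i < 10; i++) {                   // blink 10 times
                    write(GPIO + "/gpio" + PIN + "/value", "1");
                    Thread.sleep(500);
                    write(GPIO + "/gpio" + PIN + "/value", "0");
                    Thread.sleep(500);
                }
                write(GPIO + "/unexport", PIN);                  // clean up
            }
        }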

    Read the article

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!.");
                //forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, though did not provide a null check. In NetBeans IDE 7.3, you see a hint for "test2", but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint there would flood the source code with too many false positives. Annotations are supported in understanding how a piece of code is intended to be used. If a method's return type is annotated @Nullable, @NullAllowed or @CheckForNull, the value is considered "strongly possible to be null"; the same applies if the variable is explicitly tested for null, as shown above. When using @NotNull, @NonNull or @Nonnull, the value is considered to be non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.) The screenshots show where the hints are displayed for the non-null hints (the "strongly possible to be null" hints are not shown below, though you can see one of them in the screenshot above), together with a comment showing what is shown when you hover over the hint. There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can instead go to Source | Inspect and then scan a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area. Thanks to Jan Lahoda for help with the info above; any misunderstandings are my own fault. Jan reports that this currently also works in NetBeans IDE 7.3 dev builds for fields, but that may need to be disabled, since right now too many false positives are returned.
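
    To see the annotation side in action, here is a small self-contained sketch; since only simple annotation names are checked, we can even declare the annotation locally for the demo:

        public class NullHintDemo {

            // Any annotation with this simple name counts, whatever its package.
            @interface CheckForNull {}

            @CheckForNull
            static String lookup(String key) {
                return key.isEmpty() ? null : key.trim();
            }

            static int keyLength(String key) {
                String value = lookup(key);
                // NetBeans IDE 7.3 would flag this dereference, because
                // lookup() is annotated as possibly returning null.
                return value.length();
            }
        }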

    Read the article

  • Essbase - FormatString

    - by THE
    A look at the documentation for "Typed Measures" shows: "Using format strings, you can format the values (cell contents) of Essbase database members in numeric type measures so that they appear, for query purposes, as text, dates, or other types of predefined values. The resultant display value is the cell’s formatted value (FORMATTED_VALUE property in MDX). The underlying real value is numeric, and this value is unaffected by the associated formatted value."

    To actually switch ON the use of typed measures in general, you need to navigate to the outline properties:

    1. Open the outline.
    2. Select properties.
    3. Change "Typed Measures enabled" to TRUE.

    As an example, I created two additional members in the ASOSamp outline:

    - A member "delta Price" in the Measures (Accounts) dimension with the formula ([Original Price],[Curr_year])-([Original Price],[Prev_year]). This is equivalent to the Variance formula used in the "Years" dimension.
    - A member "Var_Quickview" in the "Years" dimension with the same formula as the "Variance" member. This will be used simply to display a second cell with the same underlying value as Variance, but formatted using a format string, hence enabling BOTH in the same report.

    In the outline you now select the member you want the format string associated with and change the "associated Format String" in the member properties. As you can see in this example, an IIF statement reading MdxFormat(IIF(CellValue()<0,"Negative","Positive")) has been chosen for both new members. After applying the format string changes and running a report via SmartView, the result shows the formatted text values in place of the raw numbers.

    Reference: Essbase Database Administrator's Guide, Chapter 12, "Working with Typed Measures".

    Read the article

  • Several New Hints

    - by Ondrej Brejla
    Hi all! Today we would like to introduce some of our new experimental hints for NetBeans 7.2: Unused Use Statement and Immutable Variables. Unused Use Statement: This hint is quite simple. It highlights (underlines) use statements which are not used. The typical use case is after some refactoring, when you forgot to remove obsolete use statements. This hint warns you about them and allows you to remove them easily: just click on the hint bulb in the gutter and select Remove Unused Use Statement. And of course, it works for combined use statements too. Immutable Variables: The next hint checks for too many assignments to a variable. And why? That's simple: mostly you should use just one assignment per variable. But sometimes you are lazy and reassign a variable where a fresh one would do, and that's exactly the case when our new hint warns you that Too many assignments (2) into variable $foo occurred. Nothing more. Yes, we know that there are some cases where more assignments should not produce a warning - for example, when one likes the longer increment syntax more than the short one - so we tried to handle these cases so as not to bother you when there is no need. Note: We are almost sure that this hint doesn't cover all your use cases, because there are a lot of them. So if you find something strange, write it into our Bugzilla so we can handle it better for you. Thanks for your patience! You can set the number of allowed assignments in Tools -> Options -> Editor -> Hints -> PHP: Immutable Variables. Note: This hint works just for common variables, not for fields. We have an enhancement request for that and it should be implemented in the next version of NetBeans (probably 7.3). And that's all for today. As usual, please test it, and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

    In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    - handler::check_if_supported_inplace_alter: Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.
    - handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
    - handler::inplace_alter_table: In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table).
    - handler::commit_inplace_alter_table: This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation.

    In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.

    The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

    Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log.

    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
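
    To make the LOCK and ALGORITHM clauses discussed above concrete, here is a minimal JDBC sketch requesting an online, in-place index build; the connection settings are illustrative, and MySQL 5.6 is expected to raise an error rather than silently fall back if the requested combination cannot be honored:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class OnlineDdlDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/test", "user", "password");
                     Statement st = con.createStatement()) {
                    // Build the secondary index in place, allowing
                    // concurrent reads and writes on the table.
                    st.execute("ALTER TABLE t ADD INDEX a (a), "
                             + "ALGORITHM=INPLACE, LOCK=NONE");
                }
            }
        }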

    Read the article

  • Schema Based Code Completion for NetBeans Platform Applications

    - by Geertjan
    Toni's recent blog entry provides, among several other interesting things, instructions for something I've been wanting to cover for a long time: schema-based code completion. The above is a sample I created via Toni's tutorial, using the schema described here: http://www.w3schools.com/schema/schema_example.asp The support for the Navigator ain't bad either, especially considering I didn't do any coding at all to get all this: And here's where you can find the whole sample: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/ShipOrder

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    1) Shares, for proportional CPU allocation: if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles.
    2) Dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use. You can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example.
    3) Capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU.

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds an application received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
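
    The patent text aside, the core control loop can be sketched in a few lines; everything below (the metric source, the single CPU knob, the thresholds) is illustrative, and a real manager would adjust whichever resource the DTrace-style instrumentation shows to be the bottleneck:

        public class WorkloadManager {

            interface Workload {
                double responseTimeMillis();    // externally visible metric
                int cpuShares();
                void setCpuShares(int shares);  // resource knob to adjust
            }

            // Hold the objective constant; vary the resource until it is met.
            static void adjust(Workload w, double targetMillis) {
                double measured = w.responseTimeMillis();
                if (measured > targetMillis * 1.1) {
                    // Missing the objective: grant more CPU shares.
                    w.setCpuShares(w.cpuShares() + 10);
                } else if (measured < targetMillis * 0.9) {
                    // Comfortably beating it: release shares for others.
                    w.setCpuShares(Math.max(10, w.cpuShares() - 10));
                }
            }
        }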

    Read the article

  • Selecting Items in a GeoToolkit Driven Map

    - by Geertjan
    When you take a look at all the tools provided by GeoToolkit, you'll be quite impressed. For example, within the US map shown in yesterday's blog entry, you can drill down into individual states by selecting them via the mouse, as shown below: With that, the basis of a more complex application is laid, since all the map-related functionality is handed to you out of the box. The sample referred to yesterday has been updated; if you check it out and run it (assuming you've taken the additional steps mentioned yesterday), you'll see the above. http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotoolkit/MyGeospatialSystem

    Read the article

  • GNOME 3.4 released, with smooth & fast magnification

    - by Peter Korn
    The GNOME community released GNOME 3.4 today. This release contains several new accessibility features, along with a new set of custom high-contrast icons which improve the user experience for users needing improved contrast. This release also makes available the AEGIS-funded GNOME Shell Magnifier. This magnifier leverages the powerful graphics functionality built into all modern video cards for smooth and fast magnification in GNOME. You can watch a video of that magnifier (with the previous version of the preference dialog), which shows all of the features now available in GNOME 3.4. This includes full/partial screen magnification, a magnifier lens, full or partial mouse cross hairs with translucency, and several mouse tracking modes. Future improvements planned for GNOME 3.6 include focus & caret tracking, and a variety of color/contrast controls.

    Read the article
