Search Results

Search found 14871 results on 595 pages for 'oracle partner'.


  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather of what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise.

    This leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). In the latter case, the application must assume that the value might be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read": the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting one is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice, the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to state requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
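
    The optimistic and pessimistic patterns above can be sketched with plain JDK concurrency primitives. This is a minimal illustration of the two patterns, not Coherence API (a Coherence NamedCache offers analogous compare-and-update and per-entry locking operations); the class and key names are hypothetical:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Sketch of the two read/update patterns described above, using JDK
        // primitives in place of a Coherence cache. Names are hypothetical.
        public class ReadPatterns {
            static final ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

            // Optimistic: perform a possibly-stale ("fuzzy") read, then state the
            // assumption about the old value via compare-and-set; a failed compare
            // is the practical detection of a stale read, and we retry.
            static void optimisticIncrement(String key) {
                while (true) {
                    Integer old = cache.get(key);          // fuzzy read
                    if (cache.replace(key, old, old + 1)) {
                        return;                            // assumption held
                    }
                    // stale read detected -- "roll back" and retry
                }
            }

            // Pessimistic: hold a lock across the read and the update so no other
            // thread can mutate the entry in between (a coherent read). Coherence
            // offers per-entry locking; a single global lock keeps the sketch short.
            static final Object lock = new Object();
            static void pessimisticIncrement(String key) {
                synchronized (lock) {
                    cache.put(key, cache.get(key) + 1);    // coherent read, then update
                }
            }

            public static void main(String[] args) {
                cache.put("counter", 0);
                optimisticIncrement("counter");
                pessimisticIncrement("counter");
                System.out.println(cache.get("counter")); // prints 2
            }
        }

    In the optimistic case, replace() is the explicit statement of the assumption made about the old value: if another thread changed the entry in between, the compare fails and the application retries rather than rolling back.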

    Read the article

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog. Once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86

    Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You will likely need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server. Reset the server and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from. Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan

    The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface. Update, in this case, the network boot device to be GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile.

    Booting the System

    Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the Boot Menu as above. Select the network interface from the list as before and start the boot process. When the GRUB bootloader loads, the default boot image is the Solaris Text Installer. On the GRUB menu, select Automated Installer and Ops Center takes over from there.

    Lessons

    The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

    Read the article

  • Latest Business Analytics Support News has been released

    - by paul.a
    The latest edition of Business Analytics Support News is now available. You can read our quarterly newsletter, Volume 5 (Doc ID 1347131.1). Featured topics include details on Smartview 64-Bit Support, Patch Set Update information for various products, Social Media for EPM, documentation, plus more. Did you miss any of the newsletters and want to find archived copies? Volumes one to four are all listed on the News Index (Doc ID 1347159.1). To ensure you don't miss out on future releases and recent changes, you can subscribe to Product News. More details about the subscription are outlined in the volume 5 "Subscribe to Product Support News" section.

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once the write() completes, the data is in the file, whereas for fprintf() it may take a while for the file to get updated to reflect the output. This results in a significant performance difference - the write() works at disk speed. The following is a program to test this:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        static double s_time;

        void starttime()
        {
          s_time = 1.0 * gethrtime();
        }

        void endtime(long its)
        {
          double e_time = 1.0 * gethrtime();
          /* its bytes written in (e_time - s_time) nanoseconds, reported as MB/s */
          printf("Throughput %5.2f MB/s\n", (1.0 * its) / (e_time - s_time) * 1000);
          s_time = 1.0 * gethrtime();
        }

        #define SIZE (10*1024*1024)

        void test_write()
        {
          starttime();
          int file = open("./test.dat", O_WRONLY | O_CREAT, S_IWGRP | S_IWOTH | S_IWUSR);
          for (int i = 0; i < SIZE; i++)
          {
            write(file, "a", 1);
          }
          close(file);
          endtime(SIZE);
        }

        void test_fprintf()
        {
          starttime();
          FILE *file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++)
          {
            fprintf(file, "a");
          }
          fclose(file);
          endtime(SIZE);
        }

        void test_flush()
        {
          starttime();
          FILE *file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++)
          {
            fprintf(file, "a");
            fflush(file);
          }
          fclose(file);
          endtime(SIZE);
        }

        int main()
        {
          test_write();
          test_fprintf();
          test_flush();
          return 0;
        }

    Compiling and running, I get 0.2 MB/s for write() and 6 MB/s for fprintf() - a large difference. There are three tests in this example; the third test uses fprintf() followed by fflush(), which is equivalent to write() both in performance and in functionality. This leads to the suggestion that fprintf() (and other buffered I/O functions) is the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents when required.

    Read the article

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips, at the speed of thought if possible. When running search operations in the WebCenter Content Web Interface, every second or fraction of a second of improvement matters. When doing some trace analysis with systemdatabase tracing in a customer environment, we came across some SQL queries that were unnecessarily being triggered! These were related to determining the folder path for every entry in the search result set. However, this folder path was not even being used as part of the displayed information in the user interface.

    Why was the folder path information being collected when it was not even displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables. When executing a quicksearch by keyword, we were getting 100 out of 2280 entries in the first page of the result set. With the 'FolderPathInSearchResults' configuration parameter set to 'true', the following queries appear in the systemdatabase tracing.

    100 executions of a query on the FolderFiles table, one for each of the documents displayed in the first page:

        >systemdatabase/6  12.13 11:17:48.188  IdcServer-199  1.45 ms. SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0 [Executed. Returned row(s): true]

    382 executions of a query on the folders tables - most of the documents that match the keyword criteria are at a folder depth level of three or four:

        >systemdatabase/6  12.13 11:17:48.114  IdcServer-199  2.57 ms. SELECT FolderFolders.*,FolderMetaDefaults.* FROM FolderFolders,FolderMetaDefaults WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+) AND ((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A')) [Executed. Returned row(s): true]

    By setting the 'FolderPathInSearchResults' configuration parameter to 'false', the above queries were no longer reported in the Server Output System Audit Information.

    Now, let's consider a practical scenario:

    Search result set page size: 100
    Average folder depth per document in the search result set: 5

    The number of folder-path-related queries will be: 100 + 5*100 = 600. If each query takes slightly over 3 ms, you would have about 2000 ms (2 seconds) spent in server time to get this information. The overall performance impact goes beyond server execution time, as this information needs to travel from the server to the browser. If the documents are nested further into the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not being displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.

    Read the article

  • UPK version 3.1 goes GA!

    Hear Russell Handley, Director, UPK Product Marketing, discuss the much anticipated release of UPK 3.1, and how it can benefit enterprises of all sizes, across all geographies.

    Read the article

  • How (and when) to move users to mysqli and PDO_MYSQL?

    - by cj
    An important discussion is taking place on the PHP "internals" development mailing list. It's one that you should take some note of. It concerns the next step in transitioning PHP applications away from the very old mysql extension and towards adopting the much better mysqli extension or the PDO_MYSQL driver for PDO. This would allow the mysql extension to be removed at some as-yet undetermined time in the future. Both mysqli and PDO_MYSQL have been around for many years and have various advantages: http://php.net/manual/en/mysqlinfo.api.choosing.php The initial RFC for this next step is at https://wiki.php.net/rfc/mysql_deprecation I would expect the RFC to change substantially based on the current discussion. The crux of that discussion is the timing of the next step of deprecation. There is also discussion of the carrot approach (showing users the benefits of moving) and the stick approach (displaying warnings when the mysql extension is used). As always, there is a lot of guesswork going on as to which MySQL APIs are in current use by PHP applications, how those applications are deployed, and what their upgrade cycle is. This is where you can add your weight to the discussion - and also help by spreading the word to move to mysqli or PDO_MYSQL. An example of such a 'carrot' is the excellent summary at Ulf Wendel's blog: http://blog.ulf-wendel.de/2012/php-mysql-why-to-upgrade-extmysql/ I want to repeat that no time frame for the eventual removal of the mysql extension is set. I expect it to be some years away.

    Read the article

  • Mojarra (JSF) 2.1.2 is here

    - by alexismp
    The Mojarra 2.1.2 release was cut a few days ago. Here are the full Maven coordinates: api, impl. You can also get to the release notes and to the list of bugs fixed in this release. This is scheduled for inclusion in the upcoming GlassFish 3.1.1 release. In fact, it's already integrated in the latest promoted build (#8), which also includes Woodstox 4.1.1. Weld 1.1.1.Final was already integrated a few builds ago. The JSF team is now working on JSR 344 (JSF 2.2), for which you can get a status by visiting http://jsf-spec.java.net/ and the associated mailing lists. A first expert draft is now available.

    Read the article

  • Discovery methods

    - by Owen Allen
    In Ops Center, asset discovery is a process in which the software determines what assets exist in your environment. You can't monitor an asset, or do anything to it through Ops Center, until it's discovered. I've seen a couple of questions about how to discover various types of asset, so I thought I'd explain the discovery methods and what they each do.

    Find Assets - This discovery method searches for service tags on all known networks. Service tags are small files on some hardware and operating systems that provide basic identification info. Once a service tag has been found, you provide credentials to manage the asset. This method can discover assets quickly, but only if the target assets have service tags.

    Add Assets with discovery profile - This method lets you specify targets by providing IP addresses, IP ranges, or hostnames, as well as the credentials needed to connect to and manage these assets. You can create discovery profiles for any type of asset.

    Declare asset - This method lets you specify the details of a server, with or without a configured service processor. You can then use Ops Center to install a new operating system or configure the SP. This method works well for new hardware.

    These methods are all discussed in more detail in the Asset Management chapter of the Feature Reference guide.

    Read the article

  • Solaris 11 LKSF

    - by nospam(at)example.com (Joerg Moellenkamp)
    After having some discussions, I have now made up my mind about it: in the next weeks you will see many republications of old articles in the blog, as I will republish all articles in the LKSF, checked and updated for Solaris 11 (some OpenSolaris-based stuff in the LKSF works slightly differently, even if it's just different package names). However, this will take time, as I will do this on weekends and evenings. At the end I will collect them and create a Solaris LKSF PDF again.

    Read the article

  • New Java ME security app, Rapid Tracker, is now full version

    - by hinkmond
    Rapid Protect has updated its Java ME security app to be the full version now, instead of a dumbed-down version that ran on feature phones. Now, that's progress! See: Full Rapid Tracker on Java ME Here's a quote: Rapid Protect, a leading company focused on mobile based safety, security and collaboration space announces major feature enhancements to its award winning "Rapid Tracker" mobile applications. In addition to many new features, it announced availability of Full Rapid Tracker application on J2ME non-smart feature phones. Hmmm... "on J2ME non-smart feature phones". I wonder if by "non-smart" they mean another word... Perhaps, "non-iDrone-Anphoid"? Hinkmond

    Read the article

  • Tell Us What YOU Need To Know

    - by Harold Green
    We're continuing to develop new Exam Preparation Seminars, and we want to know -- what is a technical question you would like an instructor to address in the video? What is a weak point you need help with? What is a specific topic you would really like us to focus on in the video seminar? Visit our web survey (BELOW) to pose your questions to our instructors. We'll address as many questions as we can, focusing on the most relevant and most popular questions. ASK HERE

    Read the article

  • The Benefits of Upgrading to PeopleSoft 9.0

    Doris Wong, Vice President and General Manager of PeopleSoft Enterprise, speaks with Fred about how PeopleSoft 9.0 fits into Applications Unlimited, what the key enhancements are in release 9.0, and why PeopleSoft customers should consider upgrading to this new release.

    Read the article

  • JavaOne Community Keynote Videos

    - by Tori Wieldt
    If you weren't able to attend JavaOne 2012 in San Francisco, one of the high points was the Community Keynote on the last day. It was by the community and for the community. It included a visit from James Gosling, demos, and community members describing what they've been up to. You can watch the highlights or the full keynote online. The continued innovation of Java requires the full engagement, participation, and collaboration of the Java community. Well done!

    Read the article

  • Maker Faire Report - Teaching Kids Java SE Embedded for Internet of Things (IoT)

    - by hinkmond
    I had a great time at this year's Maker Faire 2014 in San Mateo, Calif., where Jake Kuramoto and the AppsLab crew, including Noel Portugal, Anthony Lai, Raymond, and Tony, set up a super demo at the DIY table. It was a simple way to learn how Java SE Embedded technology could be used to code the Internet of Things (IoT) devices on the table. The best part of our set-up was seeing the kids sit down and do some coding without all the complexity of a Computer Science course. It was very encouraging to see how interested the kids were when walking them through the programming steps, then seeing their eyes light up when telling them, "You just coded a Java enabled Internet of Things device!" as the Raspberry Pi-connected devices turned on or started to move from their Java Embedded program. See: The AppsLab at Maker Faire It will be interesting to see how this next generation of kids grows up with all these Internet of Things devices around them, and to watch how they will program them. Hopefully, they will be using Java SE Embedded technology to do so. From the looks of it at this year's Maker Faire, we might have a bunch of motivated young Java SE Embedded coders coming up the ranks soon. Well, they have to get through middle school first, but they're on their way! Hinkmond

    Read the article

  • LDAP ACI Debugging

    - by user13332755
    If you've ever wondered which ACI in LDAP is used for a specific ADD/DELETE/MODIFY/SEARCH request, you need to enable ACI debugging to get details about this. Edit/modify dse.ldif:

        nsslapd-infolog-area: 128
        nsslapd-infolog-level: 1

    ACI logging will be placed in the 'errors' file and looks like:

        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 - Num of ALLOW Handles:15, DENY handles:0
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 - Processed attr:nswmExtendedUserPrefs for entry:uid=mparis,ou=people,o=vmdomain.tld,o=isp
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 - Evaluating ALLOW aci index:33
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 - ALLOW:Found READ ALLOW in cache
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 - acl_summary(main): access_allowed(read) on entry/attr(uid=mparis,ou=people,o=vmdomain.tld,o=isp, nswmExtendedUserPrefs) to (uid=msg-admin-redzone.vmdomain.tld-20100927093314,ou=people,o=vmdomain.tld,o=isp) (not proxied) (reason: result cached allow, deciding_aci "DA anonymous access rights", index 33)

    Read the article

  • Will you choose JavaFX for Development?

    - by javafx4you
    A few weeks ago, a poll on the home page of java.net caught my eye, because it was related to JavaFX. Its title: Will you use JavaFX for development once it's fully ported to Mac and Linux platforms? Usually, the results for this type of poll are published on the editor's Daily Blog soon after the poll closes. For some reason, this didn't happen for the JavaFX poll, so I'll take a shot at interpreting the results. The results found on java.net look pretty close to the following: Although this way of looking at the results already gives us an idea of how much traction JavaFX is getting, there are just too many types of answers, which makes it hard to read. The answers "maybe" and "I don't know" are awfully similar, so I'm tempted to collapse these together. Then there is "No, I don't do that type of development", which just doesn't belong here, as obviously developers who have chosen this answer don't develop Rich Internet Apps, and therefore I will adapt the % results accordingly. Finally, I've been tempted to combine the top three categories just to simplify the results. This gives me the following chart: Whether you prefer the original graph or my simplified take on it, one thing is sure: less than 10% of developers who have taken this poll plan to stick to another toolkit (presumably Swing or SWT), while the vast majority is inclined to use JavaFX. When you take into account that JavaFX 2.0 is pretty much a "new" API (no more JavaFX Script), I think these are some pretty good results, 6 months after the official release of JavaFX 2.0.

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you'll notice that if you select 'all' and update, it changes to just a *.

    To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it's better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'. You may also find previous trace logs that have rolled over; in this case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size, and it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        #Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    These are set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section. Restart the server and it should take on the new logging settings.

    Read the article

  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements. In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.

    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects. At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors, to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).

    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
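
    The "map of maps" approach described above can be sketched with plain JDK collections. This is an illustration of the grouping idea only, with hypothetical class and method names; in Coherence the inner-map update would be encapsulated in an entry processor so that the read-modify-write happens atomically on the storage member:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Groups many small logical entries into fewer, larger cache entries.
        // A logical key is split into a group key (a partial hash) that selects
        // the cache entry, and the full key that selects the value within it.
        public class GroupedCache {
            private static final int GROUPS = 1024;

            // Stand-in for a NamedCache; one entry per group instead of one per logical key.
            private final ConcurrentMap<Integer, Map<Long, String>> cache = new ConcurrentHashMap<>();

            private int groupKey(long key) {
                return (int) (key % GROUPS); // partial hash: many logical keys share one entry
            }

            public void put(long key, String value) {
                // In Coherence this update would run inside an entry processor so the
                // read-modify-write of the inner map happens server-side and atomically.
                cache.compute(groupKey(key), (g, inner) -> {
                    Map<Long, String> copy = (inner == null) ? new HashMap<>() : new HashMap<>(inner);
                    copy.put(key, value);
                    return copy;
                });
            }

            public String get(long key) {
                Map<Long, String> inner = cache.get(groupKey(key));
                return (inner == null) ? null : inner.get(key);
            }

            public static void main(String[] args) {
                GroupedCache c = new GroupedCache();
                c.put(42L, "value-42");
                System.out.println(c.get(42L)); // value-42
            }
        }

    Copying the inner map on update keeps concurrent readers safe without locking, at the cost of extra allocation; the per-entry metadata overhead is amortized across all logical entries in a group.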

    Read the article

  • Attending MySQL Connect? Your Opinion Matters.

    - by Monica Kumar
    Take the MySQL Connect 2012 Survey. Thanks to everyone who is at the first-ever MySQL Connect conference in San Francisco this weekend! Don't forget to take your Conference and Session Surveys. Your opinions help shape next year's conference. Take a survey for each of the sessions you attend and be entered into a drawing for a $200 American Express Gift Certificate. Fill in the daily conference survey and be entered into a drawing for a $500 American Express Gift Card. Surveys are located here. Make your opinion count! Take the survey now. Congratulations to Robin Schumacher from DataStax, the winner of the Saturday survey!

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 4)

    - by hinkmond
    Here's the first sign of life of a Hexbug Spider Robot converted to become a Skynet Big Data model T-1. Yes, this is the T-1, the precursor to the Cyberdyne Systems T-101 (and you know where that will lead to...). It is demonstrating a heartbeat, driven by a simple Java SE Embedded program. See: Skynet Model T-1 Heartbeat It's alive!!! Well, almost alive. At least there's a pulse. We'll program more of its actions next, and then finally connect it to Skynet Big Data to do more advanced stuff, like hunt for Sarah Connor. Java SE Embedded programming makes it simple to create the first model in the long line of T-XXX robots to take on the world. Raspberry Pi makes connecting it all together on one simple device easy. Next post, I'll show how the wires are connected to drive the T-1 robot. Hinkmond
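
    The post doesn't include the heartbeat program itself, so here is a minimal sketch of what such a Java SE Embedded program might look like, assuming the robot's drive circuit is wired to a GPIO pin exposed through the Raspberry Pi's Linux sysfs interface. The pin number (17) and the beat timings are hypothetical:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        // Minimal "heartbeat" sketch: pulses one GPIO pin in a lub-dub rhythm
        // using the Linux sysfs GPIO interface on a Raspberry Pi.
        // GPIO pin 17 is a hypothetical choice; run as root to access sysfs.
        public class Heartbeat {
            private static final String GPIO = "/sys/class/gpio";
            private static final String PIN = "17";

            private static void write(String file, String value) throws IOException {
                Files.write(Paths.get(file), value.getBytes());
            }

            public static void main(String[] args) throws Exception {
                // Export the pin and configure it as an output.
                if (!Files.exists(Paths.get(GPIO + "/gpio" + PIN))) {
                    write(GPIO + "/export", PIN);
                }
                write(GPIO + "/gpio" + PIN + "/direction", "out");

                String value = GPIO + "/gpio" + PIN + "/value";
                while (true) {
                    // Two quick pulses followed by a rest: lub... dub... pause.
                    for (int beat = 0; beat < 2; beat++) {
                        write(value, "1");   // drive the pin high
                        Thread.sleep(100);
                        write(value, "0");   // and low again
                        Thread.sleep(150);
                    }
                    Thread.sleep(600);
                }
            }
        }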

    Read the article
