Search Results


  • EBS Concurrent Processing Information Center

    - by LuciaC
    Do you have problems or questions about concurrent request processing? Do you want to know:

      - How and when to run CP Diagnostics?
      - The latest Hot Topics being looked at for Concurrent Processing?
      - All about the Concurrent Process Analyzer self-service Health-Check script?

    Go to the EBS Concurrent Processing Information Center (Doc ID 1304305.1) and find out the above and lots more!

    Read the article

  • Getting Started Plugging into the "Find in Projects" Dialog

    - by Geertjan
    In case you missed it amidst all the code in yesterday's blog entry, the "Find in Projects" dialog is now pluggable. I think that's really cool. The code yesterday gives you a complete example, but let's break it down a bit and deconstruct it down to a very simple hello world scenario. We'll end up with as many extra tabs in the "Find in Projects" dialog as we need (for example, three in this case), and clicking on any of those extra tabs will, in this simple example, simply show us "Hello World". Once we have that, we'll be able to continue adding small bits of code over the next few blog entries until we have something more useful. So, in this blog entry, you'll literally be able to display "Hello World" within a new tab in the "Find in Projects" dialog:

        import javax.swing.JComponent;
        import javax.swing.JLabel;
        import org.netbeans.spi.search.provider.SearchComposition;
        import org.netbeans.spi.search.provider.SearchProvider;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.openide.NotificationLineSupport;
        import org.openide.util.lookup.ServiceProvider;

        @ServiceProvider(service = SearchProvider.class)
        public class ExampleSearchProvider1 extends SearchProvider {

            @Override
            public Presenter createPresenter(boolean replaceMode) {
                return new ExampleSearchPresenter(this);
            }

            @Override
            public boolean isReplaceSupported() {
                return false;
            }

            @Override
            public boolean isEnabled() {
                return true;
            }

            @Override
            public String getTitle() {
                return "Demo Extension 1";
            }

            public class ExampleSearchPresenter extends SearchProvider.Presenter {

                private ExampleSearchPresenter(ExampleSearchProvider1 sp) {
                    super(sp, true);
                }

                @Override
                public JComponent getForm() {
                    return new JLabel("Hello World");
                }

                @Override
                public SearchComposition composeSearch() {
                    return null;
                }

                @Override
                public boolean isUsable(NotificationLineSupport nls) {
                    return true;
                }
            }
        }

    That's it: not much code, it works fine in NetBeans IDE 7.2 Beta, and it is easier to digest than the big chunk from yesterday. If you make three classes like the above in a NetBeans module and install it, you'll have three new tabs in the "Find in Projects" dialog. The only required dependencies are the Dialogs API, Lookup API, and Search in Projects API. Read the javadoc linked above, and in the next blog entries we'll continue to build out something like the sample you saw in yesterday's blog entry.

    Read the article

  • Oops, I left my kernel zone configuration behind!

    - by mgerdts
    Most people use boot environments to move in one direction. A system starts with an initial installation, and from time to time new boot environments are created (typically as a result of pkg update) and then the new BE is booted. This post is of little interest to those people, as no hackery is needed. This post is about some mild hackery. During development, I commonly test different scenarios across multiple boot environments. Many times those tests aren't related to the act of configuring or installing zones, so it's kinda handy to avoid the effort involved in zone configuration and installation. A somewhat common order of operations is like the following:

        # beadm create -e golden -a test1
        # reboot

    Once the system is running in the test1 BE, I install a kernel zone.

        # zonecfg -z a178 create -t SYSsolaris-kz
        # zoneadm -z a178 install

    Time passes, and I do all kinds of stuff to the test1 boot environment and want to test other scenarios in a clean boot environment. So then I create a new one from my golden BE and reboot into it.

        # beadm create -e golden -a test2
        # reboot

    Since the test2 BE was created from the golden BE, it doesn't have the configuration for the kernel zone that I configured and installed. Getting that zone over to the test2 BE is pretty easy. My test1 BE is really known as s11fixes-2.

        root@vzl-212:~# beadm mount s11fixes-2 /mnt
        root@vzl-212:~# zonecfg -R /mnt -z a178 export | zonecfg -z a178 -f -
        root@vzl-212:~# beadm unmount s11fixes-2
        root@vzl-212:~# zoneadm -z a178 attach
        root@vzl-212:~# zoneadm -z a178 boot

    On the face of it, it would seem easier to just use zonecfg -z a178 create -t SYSsolaris-kz within the test2 BE to get the new configuration over. That would almost work, but it would leave behind the encryption key required for access to host data and any suspend image. See solaris-kz(5) for more info on host data. I very commonly have more complex configurations that contain many storage URIs and non-default resource controls. Retyping them would be rather tedious.

    Read the article

  • Webcast: Oracle Solaris 11 – Innovations for Your Datacenter

    - by Cinzia Mascanzoni
    Invite your partners to view this pre-recorded online event to learn how new advances in Oracle Solaris 11.1 can help them deliver a secure, scalable, mission-critical cloud. These three informative and practical sessions will also present the extended high-availability and disaster-recovery capabilities of Oracle Solaris Cluster. Register now for the event and watch the Oracle Solaris 20th Anniversary video.

    Read the article

  • Partitioning Strategies for P6 Reporting Database

    - by Jeffrey McDaniel
    Prior to P6 Reporting Database version 3.2 SP1, range partitioning was used. This was applied only to the history tables. The ranges were defined during installation, and additional ranges would need to be added once your data reached the final defined range. As of P6 Reporting Database version 3.2 SP1, interval partitioning was implemented. Interval partitioning was applied to the existing history tables as well as the Slowly Changing Dimension tables. One of the major advantages of interval partitioning is that there is no more manual addition of ranges: partitions for the defined interval are created automatically when data inserted into the table exceeds the existing partitions. The 3.2 SP1 documentation includes steps on how to update your partitioning. For all versions after 3.2 SP1, interval partitioning is the only partitioning option used. When upgrading, it is important to be aware of these changes. Here is a link with more information on partitioning, the types and the advantages: http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm
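
    As an illustration of why interval partitioning removes the manual range maintenance described above, here is a minimal sketch in Java/JDBC. This is an assumption-laden illustration: the connection string, credentials, and the demo_history table are hypothetical and not the actual P6 Reporting Database schema; only the INTERVAL clause itself is standard Oracle DDL.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class IntervalPartitionSketch {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details; substitute your own.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/orcl", "demo", "demo");
                     Statement stmt = conn.createStatement()) {
                    // With INTERVAL partitioning, Oracle creates a new monthly
                    // partition automatically whenever an inserted snapshot_date
                    // falls beyond the highest existing partition.
                    stmt.execute(
                        "CREATE TABLE demo_history (" +
                        "  row_id        NUMBER," +
                        "  snapshot_date DATE" +
                        ") PARTITION BY RANGE (snapshot_date)" +
                        "  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))" +
                        "  (PARTITION p0 VALUES LESS THAN (DATE '2012-01-01'))");
                }
            }
        }

    With the INTERVAL clause in place, inserting a row whose snapshot_date lies beyond the highest existing partition causes Oracle to create the new monthly partition on the fly; under the older range partitioning scheme, an explicit ALTER TABLE ... ADD PARTITION would have been needed first.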

    Read the article

  • Video White Paper: Successful Maintenance Management Strategies for Oil & Gas Projects

    - by Melissa Centurio Lopes
    Watch this short video white paper to learn how you can optimize your daily and routine maintenance with Oracle Primavera's project portfolio management solution. You can also register and read the full white paper, "Optimizing Daily and Routine Maintenance through Project Portfolio Management", to discover how to:

      - Capture best practices to successfully manage daily and routine maintenance projects.
      - Keep your equipment running longer and more efficiently.

    Read the article

  • Collabnet Subversion and Self Signed Certificates

    - by Robert May
    We installed Collabnet as our Subversion server recently. This is the first time that we've used it. In general, it seems pretty good, but we ran into a problem with it. People were getting the following error in Tortoise:

        OPTIONS of 'https://xxxx.xxxxxxxx.xxxx/svn/xxxxx': SSL handshake failed: SSL error code - 1/1/336032856 (https://xxxx.xxxxxxxx.xxxx)

    The odd thing is that for some people it worked, and for others it didn't! I also couldn't find anything useful out on the internet. We had checked the "Subversion Server should serve via https" option in the settings, and all of the ports were open, etc. This option causes a self-signed certificate to be used. What we discovered: Tortoise must use the same URL as is in the Hostname field on the General settings for Collabnet, or you'll get this error. Basically, some people were using https://svn.xxxxxxx.xxxxx and others were using https://computername.xxxxxxxx.xxxx. Because the Hostname field used the computer-name version, the whole thing broke. By changing the host name to the svn version, which is what they should be using, the problem went away. The users do get the "Accept Certificate" prompt, but we can live with that!

    Read the article

  • Project Doneness and Einstein's Razor

    - by Malcolm Anderson
    I've started working on a series of articles about the value of having testers involved in requirements gathering. Today I was reminded of a useful tool that has provided value to me for at least 20 years. To those of you who already use this tool, I'm interested in your stories where it has made a difference for you, and to those of you who have never heard of it, I hope sharing it will make a difference in your careers. I was reminded of it because I just finished a 3 month set of personal projects and was reviewing the success of those projects while putting together my next set of 3 month projects. During this review, I noticed that a good number of my projects did not have the level of success that I wanted. The results were good, but they could have been better. Then it hit me: I didn't have clear enough doneness criteria. As a Scrum practitioner, I wouldn't think of running a sprint without reviewing the backlog with Einstein's Razor, so why wouldn't I do the same for my own projects? I can hear a few of you asking, "What's Einstein's Razor?" I'm glad you asked. I was once told that Einstein told an audience, "If you can't explain what you do to a relatively bright six year old, you probably don't understand it yourself." This quote had an impact on me, especially early in my career as a solo developer. At the time, I was mostly doing end to end software development. I found that I saved myself a lot of pain and trouble by turning that quote around to "If you can't explain your project's doneness criteria in such a way that a relatively bright six year old can competently determine your project's success or failure, then you have not broken it down to a fine enough level." There are more negatives in that quote than I'm happy with, but it still gives me tons of value to this day. In your opinion, in your current projects, could a 6 year old competently pass or fail your next sprint? What risks are you running if your answer is "No"?

    Read the article

  • A Small Blog About Huge Pages

    - by rickramsey
    Video Interview: What Are Linux Huge Pages?, by Ed Whalen, Oracle ACE
    Blog: There's Been a Change In How Huge Pages Are Allocated, by Tanel Poder, Oracle ACE Director
    Blog: Performance Issues with Transparent Huge Pages (thank you, Bjoern Rost!)
    Web: About the Car, by Smart Ridez LLC, of Woodland Hills, California

    - Rick

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud: it was raining clouds in this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first after we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published. I met a number of platform administrators in Oracle: DBAs, middleware admins, SOA admins... The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with as much certainty as propounded by Heisenberg. What happened to the age-old IT discipline around administration, compliance and configuration management? VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision a full-fledged application as VM Assemblies. In this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel went down from weeks to hours was gratifying indeed). However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:

      - Project owners requiring an EC2-like VM instance for their projects
      - Admins needing the same for SPARC/Solaris
      - DBAs requiring dedicated databases for new projects
      - APEX developers needing just a ready-to-consume schema as a service
      - Java developers looking for a runtime platform
      - QA engineers needing a fast clone of their production environment

    If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production: you shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For functional testing it possibly does not matter. DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed. When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one could possibly think that we are trying to be the "renaissance man"! Well, I might have digested that had it not been for the various personas that I described above.
    Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quota and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base! In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos to get started with EM 12c: http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html

    Sudip Datta
    The views expressed here are my own and do not necessarily reflect the views of Oracle.

    Read the article

  • Quick run through of the WP7 Developer Tools January 2011

    - by mbcrump
    In case you haven't heard, the latest WP7 developer tools update was released yesterday and contains a few goodies. First you need to go and grab the bits here. You can install them in any order, but I installed WindowsPhoneDeveloperResources_en-US_Patch1.msp first, then VS10-KB2486994-x86.exe. They install silently; in other words, you would need to check Programs and Features and look in Installed Updates to see if they installed successfully. Once you get them installed you can try out a few new features, like copy and paste. Just fire up your application, put a TextBox on it, and select the text, and you will have the copy option highlighted above the text; once you select it you will have the option to paste. Another feature is the Windows Phone Capability Detection Tool, which detects the phone capabilities used by your application. This will prevent you from submitting an app to the marketplace that says it uses x feature but really does not. How do you use it? Navigate to either directory:

        %ProgramFiles%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect
        %ProgramFiles (x86)%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect

    and run the following command:

        CapabilityDetection.exe Rules.xml YOURWP7XAPFILEOUTPUTDIRECTORY

    So, in my example, you will see my app only requires ID_CAP_MICROPHONE. Let's see what WMAppManifest.xml says in our WP7 project: whoa, that's a lot of extra stuff we don't need! We can now safely delete unused capabilities. Some of the other fixes are (copied straight from Microsoft):

      - Fixes a text selection bug in pivot and panorama controls. In applications that have pivot or panorama controls that contain text boxes, users can unintentionally change panes when trying to copy text. To prevent this problem, open your application, recompile it, and then resubmit it to the Windows Phone Marketplace.
      - Windows Phone Connect Tool: allows you to connect your phone to a PC when the Zune software is not running and debug applications that use media APIs. For more information, see How to: Use the Connect Tool.
      - Updated Bing Maps Silverlight Control: includes improvements to gesture performance when using the Bing Maps Silverlight Control.
      - Windows Phone Developer Tools fix allowing deployment of XAP files over 64 MB in size to physical phone devices for testing and debugging.

    That's pretty much it. Thanks again for reading my blog!

    Read the article

  • An alternative way to request read receipts

    - by lavanyadeepak
    Sometime or other we use messaging namespaces like System.Net.Mail or System.Web.Mail to send emails from our applications. When we need to include headers to request delivery or read receipts (often called Message Disposition Notifications), we lock ourselves into the limitation that not all email servers and email clients can satisfy this. We can push this boundary out a little now, thanks to a technique I discovered from Gawab. It embeds a small invisible image of 1x1 dimension, and the image source reads as recieptimg.php?id=2323425324. When this image is requested by the web browser or email client, the server-side handler does a smart mapping based on the ID to indicate that the message was read. These are known as 'web bugs'. But wait, it is not a foolproof solution, since spammers misuse this technique to confirm the activeness of an email address, and most email clients suppress inline images for security reasons. I just thought I would share this observation for the benefit of others.
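
    For the curious, here is a minimal sketch of the technique in Java using the JavaMail API (the post itself mentions .NET's System.Net.Mail; the SMTP host, addresses, and tracking URL below are placeholders modeled on the example above, not a real endpoint):

        import java.util.Properties;
        import javax.mail.Message;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class WebBugMailSketch {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.example.com"); // placeholder host
                Session session = Session.getInstance(props);

                MimeMessage message = new MimeMessage(session);
                message.setFrom(new InternetAddress("sender@example.com"));
                message.setRecipients(Message.RecipientType.TO,
                        InternetAddress.parse("recipient@example.com"));
                message.setSubject("Newsletter");

                // The invisible 1x1 image: when the client loads it, the
                // server-side handler behind the URL maps the id to "read".
                String html = "<html><body>"
                        + "<p>Hello!</p>"
                        + "<img src=\"https://www.example.com/recieptimg.php?id=2323425324\""
                        + " width=\"1\" height=\"1\" alt=\"\"/>"
                        + "</body></html>";
                message.setContent(html, "text/html; charset=utf-8");

                Transport.send(message);
            }
        }

    As noted above, a client that blocks remote images defeats this, so treat a recorded hit as confirmation of reading but the absence of one as no information.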

    Read the article

  • SQL Server Express Profiler

    - by David Turner
    During a recent project, while waiting for our development database to be provisioned on the client's corporate SQL Server environment (these things can sometimes take weeks or months to be set up), we began our initial development against a local instance of SQL Server Express, just as an interim measure until the development database was live. This was going just fine until we found that we needed to do some profiling to understand a problem we were having with the performance of our ORM-generated data access layer. The full version of SQL Server Management Studio includes a profiler that we could use to help with this kind of problem, but the Express version does not, so I was really pleased to find that there is a freely available profiler for SQL Server Express, imaginatively titled 'SQL Server Express Profiler', and it worked great for us. http://sites.google.com/site/sqlprofiler/

    Read the article

  • Call For Papers Tips and Tricks

    - by speakjava
    This year's JavaOne session review has just been completed, and by now everyone who submitted papers should know whether they were successful or not. I had the pleasure again this year of leading the review of the 'JavaFX and Rich User Experiences' track. I thought it would be useful to write up a few comments to help people in the future when submitting session proposals, not just for JavaOne, but for any of the many developer conferences that run around the world throughout the year. This also draws on conversations I recently had with various Java User Group leaders at the Oracle User Group summit in Riga. Many of these leaders run some of the biggest and most successful Java conferences in Europe.

    Try to think of a title which will sound interesting. For example, "Experiences of performance tuning embedded Java for an ARM architecture based single board computer" probably isn't going to get as much attention as "Do you like coffee with your dessert? Java on the Raspberry Pi". When thinking of the subject and title for your talk, try to steer clear of sessions that might be too generic (and so get lost in a group of similar sessions). Introductory talks are great when the audience is new to a subject, but beware of providing sessions that are too basic when the technology has been around for a while and there are lots of tutorials already available on the web.

    JavaOne, like many other conferences, has a number of fields that need to be filled in when submitting a paper. Many of these are selected from pull-down lists (like which track the session is applicable to). Check these lists carefully. A number of sessions we had needed to be shuffled between tracks when it was thought that the one selected was not appropriate. We didn't count this against any sessions, but it's always a good idea to try and get the right one from the start, just in case.

    JavaOne, again like many other conferences, has two fields that describe the session being submitted: abstract and summary. These are the most critical to a successful submission. The two fields have different names, and that is significant; a frequent mistake people make is to write an abstract for a session and then duplicate it for the summary. The abstract (at least in the case of JavaOne) is what gets printed in the show guide and is typically what will be used by attendees when deciding what sessions to attend. This is where you need to sell your session, not just to the reviewers, but also to the people you want in your audience. Submitting a one-line abstract (unless it's a really good one line) is not usually enough for someone to decide whether your session is worth investing an hour of conference time. The abstract typically has a limit of a few hundred characters; try to use as many of them as possible to get as much information about your session across. The summary should be different from the abstract (and don't leave it blank, as some people do). This field is where you can give the reviewers more detail about things like the structure of the talk, possible demonstrations and so on. As a reviewer, I look to this section to help me decide whether the hard sell of the title and abstract will actually be reflected in the final content. Try to make this comprehensive, but don't make it excessively long. When you have to review possibly hundreds of sessions, a certain level of conciseness can make life easier for reviewers and help the cause of your session.
    If you've not made many submissions for talks in the past, or if this is your first, try to give reviewers places to find background on you as a presenter. Having an active blog and Twitter handle can also help reviewers if they're not sure what your level of expertise is. Many calls-for-papers have places for you to include this type of information. It's always good to have new and original presenters and presentations at conferences. Hopefully these tips will help you be successful when you answer the next call-for-papers.

    Read the article

  • Webcast - C-level Perspectives on How HR Can Take on a Bigger Role in Strategic Planning

    - by Scott Ewart
    The Economist Intelligence Unit (EIU), on behalf of IBM and Oracle, recently surveyed a number of C-level executives in North America and Western Europe to understand how HR can take on a bigger role in driving growth. The resulting reports highlight the actions senior HR leaders can take to place themselves at the heart of the debate on a company's strategic direction. In this session, IBM and Oracle HCM specialists will review the findings of the EIU research reports and provide guidance on how technology innovation can help to align talent strategies with long-term business goals. Participants will gain an understanding of the following:

      - Results of the Economist Intelligence Unit study on "Executive Perceptions of the HR Function"
      - Differences in perspective between CEOs and CFOs
      - How the HR professional can take a bigger role in driving business growth

    Join us on Thursday, October 25, for a live webcast.

    Speakers:
      - Gina Wells, Global Oracle HCM Leader, IBM Global Business Services
      - Michelle Newell, Senior Director, HCM Applications Marketing, Oracle

    Register here for the webcast on Thursday, October 25.

    Read the article

  • Groovy Refactoring in NetBeans

    - by Martin Janicek
    Hi guys, during NetBeans 7.3 feature development I spent quite a lot of time trying to get some basic Groovy refactoring into the game. I've implemented find usages and rename refactoring for some basic constructs (class types, fields, properties, variables and methods). It's certainly not perfect and it will definitely need a lot of fixes and improvements to get it one hundred percent reliable, but I need to start somehow :) I would like to ask all of you to test it as much as possible and file new tickets for the cases where it doesn't work as expected (e.g. some occurrences which should be in the usages aren't there, etc.). It's really important for me, because I don't have a real Groovy project and thus I can test only some simple cases. I can promise that with your help we can make it really useful for the next release. Also, please be aware that the current version focuses only on .groovy files. That means it won't find any usages from .java files (and the same applies to finding usages from Java files: it won't find any Groovy usages). I know it's not ideal, but as I said, we have to start somehow, and it wasn't possible to make it all-in-one, so the only other option was to wait for NetBeans 7.4. I'll focus on better Java-Groovy integration in the next release (not only in refactoring, but also in navigation, code completion etc.). BTW: I've created a new component with the surprising name "Refactoring" in our Bugzilla [1], so please put reported issues into this category.

    [1] http://netbeans.org/bugzilla/buglist.cgi?product=groovy;component=Refactoring

    Read the article

  • Total Cloud Control for Systems - Webcast on April 12, 2012 (18:00 CET/5pm UK)

    - by Javier Puerta
    Total Cloud Control Keeps Getting Better. Join Oracle Vice President of Systems Management Steve Wilson and a panel of Oracle executives to find out how your enterprise cloud can achieve 10x improved performance and 12x operational agility. Only Oracle Enterprise Manager Ops Center 12c allows you to:

      - Accelerate mission-critical cloud deployment
      - Unleash the power of Solaris 11, the first cloud OS
      - Simplify Oracle engineered systems management

    You'll also get a chance to have your questions answered by Oracle product experts and dive deeper into the technology by viewing our demos, which trace the steps companies like yours take as they transition to a private cloud environment. Register today for this interactive keynote and panel discussion.

    Agenda:
      - 18:00 CET (5:00 pm UK): Keynote: Total Cloud Control for Systems
      - 18:45 CET (5:45 pm UK): Panel Discussion with Oracle Hardware, Software, and Support Executives
      - 19:15 CET (6:15 pm UK): Demo Series: A Step-by-Step Journey to Enterprise Clouds

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML etc. Like SQL, LINQ provides a declarative notation of problem solving: you don't need to describe in detail how a task should be solved; you describe what should be solved at all. This frees the developer from error-prone iterator constructs. Ideal, of course, would be to access features this way. Then this construct is conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures =
            features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. That is easier than you might think at first sight: you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution): only when you really request entities (foreach, Count(), ToList(), ...) does instantiation and processing take place, although it was already set up at a completely different place. Especially with multiple iterations through the entities, in your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass,
            IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor =
                featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features =
            _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle);

    You have to be careful with a recycling cursor. After a deferred execution in the same context, it is not a good idea to re-iterate over the features: in that case only the content of the last (recycled) feature is provided, and all the features in the second pass are the same. Therefore, this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, and so the cursor has already been moved through the features. So the extension method ForEach() always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and thus the cursor runs again - that's the magic already mentioned above.

    Perspective

    Now you can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. This requires only that the method and property for accessing the enumerator be programmed. In the enumerator itself, in the Reset() method, you organize the re-execution of the search.
    This can be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(_featureClass,
            featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side in exactly the same way as described above. Thus cursors, selection sets, etc. merge into a single matter, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily - a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because of the state machine managed in the background, a lot of overhead is created and processing takes additional time - about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building a list of results. In my experience, the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, maintainability and performance have to be balanced, and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is in any case dominated not by code execution but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects

    Read the article

  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI's Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM's use-case driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case driven approach aids a project team in managing the project's scope. The use-case model is one of several tools in OUM that are used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project's scope. Here are some of the primary benefits of using the use-case model as part of the effort of establishing and managing project scope:

      - The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for decisions regarding architecture and design and how they relate to the project's objectives.
      - Once agreed upon, the model can be put under change control, and any updates to the model can then be quickly identified as potentially affecting the project's scope.
      - Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule.
      - A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use-cases can be tackled early in the project.
      - The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but also in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities.

    The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication and you have an effective tool for reducing the risk of budget and/or schedule overruns due to out-of-control scope changes. Now that you've had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.

    Read the article

  • New Oracle University Courses for June 2014 - Just Released!

    - by user12601713
    - Written by the Oracle University Marketing team

    At Oracle University, we work hard to make sure our training and certification portfolio is current. Here's a quick summary of our hottest new courses.

    Top Course Releases for June:
      1) Oracle BI 11g R1: Build Repositories
      2) Oracle WebCenter Sites 11g for System Administrators
      3) Oracle Database 12c: Clusterware Administration
      4) Oracle Database 12c: ASM Administration
      5) Oracle Enterprise Manager Ops Center 12c R2 Virtualizing Systems
      6) XML Fundamentals Ed 1.1
      7) PeopleSoft Global Payroll Rel 9.2
      8) Oracle Agile 9.3.3 Administrator
      9) Oracle Agile 9.3.3 Product Collaboration

    This is just a sample of what we expect to be popular new releases. Explore all of Oracle's training and certifications at education.oracle.com. Courses can be taken in the classroom or online; choose your learning method based on your schedule.

    Read the article

  • Devoxx Belgium - CFP Closes On July 5th

    - by Yolande Poirier
    The biggest Java conference in Europe is taking place in Antwerp, Belgium, from November 11 to 15, 2013. The conference is designed by developers for developers and attracts renowned international speakers. The review committee looks for passionate speakers who are technically knowledgeable and not afraid to speak in front of a full room of Devoxxians. Speakers can increase their CFP acceptance rate by submitting one or more talks for the Tools in Action, Quickie, BOF, University, Conference and Hands-on Labs session types.

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from ever more sources (social networks, telecommunications and web logs, RFID readers, etc.) must be logically linked, integrated in real time, and processed. But how is the practical implementation going? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated business intelligence strategy. Isolated BI solutions predominate, and they are already operating at the limits of their capacity. In other words, data is so far being used as a value-creating resource only to a limited extent! The result of the study sounds alarming, but companies can turn it to their advantage: those who tackle Big Data now can secure a profitable lead over the competition. What does the analytics environment of the future look like? How and where can Big Data be used for business success? Answers are provided by the customer event series run by Oracle and Oracle Platinum Partner Steria Mummert Consulting, where strategies are developed for how companies can use Information Discovery to expand their BI potential towards Big Data, step by step.

    Highlights from Munich: The feedback from Munich, the first stop of the event series on July 23, was thoroughly positive. It was not just the great location, "La Villa" in the Bamberger Haus, that won people over; the 31 participants also took away a great deal of substance, including a concrete proposal for their own roadmap towards Big Data. The opening question of the day was as simple as it was comprehensive: how do we keep an overview in a complex world? Steria Mummert Consulting presented the status quo for business intelligence in Europe along the lines of the European biMA® study 2012/13. Using application examples from their own practice, the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches. A very vivid demo of Endeca showed, for example, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision makers can simply change the filters via drag & drop and thus get an individually structured overview of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered a look ahead. The verdict: a successful mixture of overview information and very concrete ideas for the customers' specific application areas.

    The "BI goes Big Data" event series stops in Hamburg and Frankfurt in August. The free event is held together with Steria Mummert Consulting and is aimed at end customers. In Hamburg on August 14, 2013 (register here). In Frankfurt a.M. on August 20, 2013 (register here).

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle OpenWorld 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS (Infrastructure as a Service) announcement.

    Oracle IaaS goes public... and private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud services spectrum, we had to provide an Infrastructure as a Service, leveraging our server, storage, OS and virtualization technologies, all "engineered together". This Cloud infrastructure was already available for our customers to rapidly build their own private cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our public cloud infrastructure (Exadata, Exalogic and SuperCluster) behind their firewall, in a private cloud model.

    Oracle 12c Multitenant Database: This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases at no extra cost, especially in terms of the memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases within this Database Container. This particular feature is a strongly compelling reason to evaluate Oracle Database 12c as soon as it becomes available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-memory database: Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most, if not all, of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but because it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, now 512 GB per node; and the new Flash Fire card, bringing up to 22 TB of flash cache. All of this (4 TB of RAM plus 22 TB of flash) is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm, making a very big difference compared to a traditional storage flash extension. And what does this extra performance bring you on an already very efficient system? Double the performance of the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days...
    Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And on that subject, Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day, and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to convey the size of Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Kalculate = math + fun

    - by Devin A. Rychetnik
    Kalculate is a "you vs. the Internet" style game for math lovers. The rules are simple: answer as many math problems as you can in 90 seconds. At the end of each round, Kalculate will tally up all the scores and show you where you ranked relative to others currently playing. Tip: answering 3 questions in 10 seconds earns you a score multiplier. If you prefer to just practice and stay out of the competition, there's an offline mode that allows you to play solo. Kalculate is free (ad-supported) and can be downloaded here.

    Read the article
