Search Results

Search found 10751 results on 431 pages for 'fast forward'.


  • BizTalk 2009 - Architecture Decisions

    - by StuartBrierley
    In the first step towards implementing a BizTalk 2009 environment, from development through to live, I put forward a proposal that detailed the options available, as well as the costs and benefits associated with these options, to allow an informed discussion to take place with the business drivers and budget holders of the project. This ultimately led to a decision being made to implement an initial BizTalk Server 2009 environment using the Standard Edition of the product. It is my hope that in the long term, as projects require and allow, we will be looking to implement my ideal recommendation of a multi-server enterprise-level environment, but given the differences in cost and the likely initial workload for the environment this was not something that I could fully recommend at this time. However, it must be noted that this decision was made in full awareness of the limits of the standard edition, and the business drivers of this project were made fully aware of the risks associated with running without the failover capabilities of the enterprise edition.

    When considering the creation of this new BizTalk Server 2009 environment, I have also recommended the creation of the following pre-production environments:

    - Development — Usage: development of solutions; unit testing against technical specifications; initial load testing; testing of deployment packages. Infrastructure: Visual Studio; BizTalk; SQL; client PCs/laptops; server environment similar to the Live implementation.
    - Test — Usage: testing of solutions against business and technical requirements. Infrastructure: BizTalk; SQL; server environment similar to the Live implementation.
    - Pseudo-Live — Usage: an “as Live” environment to allow testing against the Live implementation; acts as back-up hardware in case of failure of the Live environment. Infrastructure: BizTalk; SQL; server environment identical to the Live implementation.

    The creation of these differing environments allows for the separation of the various stages of the development cycle. The development environment is for use when actively developing a solution; it is a potentially volatile environment whose state at any given time cannot be guaranteed. It allows developers to carry out initial tests in an environment that is similar to the live environment and also provides an area for the testing of deployment packages prior to any release to the test environment.

    The test environment is intended to be a semi-volatile environment that is similar to the live environment. It will change periodically through the development of a solution (or solutions) but should be otherwise stable. It allows for the continued testing of a solution against requirements without the worry that the environment is being actively changed by any ongoing development. This separation of development and test is crucial in ensuring the quality and control of the tested solution.

    The pseudo-live environment should be considered an almost static environment. It should mimic the live environment and can act as back-up hardware in the case of live failure. This environment acts as an area for “as live” testing, where the performance and behaviour of the live solutions can be replicated. There should be relatively few changes to this environment, with software releases limited to “release candidate” level releases prior to going live.

    Whereas the pseudo-live environment should always mimic the live environment, to save on costs the development and test servers could be implemented on lower-specification hardware. Consideration can also be given to the use of a virtual server environment to further reduce hardware costs in the development and test environments; indeed, this virtual approach can also be extended to pseudo-live and live, assuming the underlying technology is in place. Although there is no requirement for the development and test server environments to be identical to live, the overriding architecture implemented should be the same as in live, and an understanding must be gained of the performance differences to be expected across the different environments.

    Read the article

  • A View from the Top – Jan Ackerman (VP APAC Recruiting)

    - by user769227
    This week, HeadHunt Magazine in Singapore took the opportunity to publish an interview with Jan Ackerman, who is Vice President for Recruitment for Asia Pacific here at Oracle. The link to the online interview can be found here. Below is the interview in full as it was published in HeadHunt Magazine.

    A View from the Top – Jan Ackerman
    Written by HeadHunt on August 16, 2012 · Leave a Comment
    By Susheela Menon

    Jan Ackerman is the Vice President for Recruiting in Asia Pacific and Japan at Oracle.

    Which particular personal trait do you attribute your professional success to?
    Perseverance has been the most important trait that has contributed to my professional success. “Endurance and perseverance combined to win in the end” has always been a great credo. I find that this trait carries through in my professional as well as my personal life. I enjoy sport fishing and find that perseverance, with a great deal of patience, is critical to the overall enjoyment and success of this sporting activity. In the same way, this doggedness – steadfastness with persistence – and tenacity toward an unyielding course of action has served me well in reaching goals and thus greater success.

    What’s the biggest challenge you have faced in your career so far?
    I have to constantly keep pace with ever-changing technology in my career. The industry changes rapidly and requires me to stay on top of the latest trends and advancements. Outside of work, I like to develop software as a hobby, and in order to ensure that what I am developing will meet what the business needs, I have to continually innovate and stay current on the latest trends in the industry to deliver a solution that will delight the end-user.

    Best career advice you have ever received.
    Always be forthright and honest with your customers and peers; mixed with a “Can Do” attitude, a great and fulfilling career can be yours to have and hold.

    What makes Oracle a great place to be in?
    The freedom to innovate and pave new avenues of success is one of the greatest things about working here at Oracle. We are always looking to grow and improve our business for our customers, and we are always adapting to present and future industry demands. This means we are always looking to change, to perform better and to do things differently. All these create a culture and spirit of innovation and success.

    What motivates you to be in the HR sector?
    I really like working with and helping people. HR is all about “the people” in the organisation, and staying focused every day on making things better for the Oracle team gives me a great deal of happiness.

    Describe your leadership style.
    I am very direct and goal-oriented. I provide ideas and guidance and then give the team all the freedom they need to reach a successful outcome. I can also be a very “roll up your sleeves” kind of manager when the task needs a bit of a push.

    What’s the biggest business challenge you see in your industry right now?
    The ability to keep pace with all the convergence in the industry and to continue to stay focused on delivering top talent to serve Oracle’s customers well. Our unique Recruiting Model has served us well in meeting these needs. We are well-placed in this goal and look forward to maintaining Oracle’s leadership role in the industry.

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort in C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i : start <= i <= end. I have produced some files containing random strings with length 4.5 characters on average. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort. So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, its quadratic worst case becomes evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists in calling merge sort recursively, i.e.

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory; therefore it is more memory-efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect a better performance of quick sort with respect to merge sort?

    EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I have found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation I have uploaded the source code on github (in case you would like to take a look at it). Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
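    The degradation described is exactly what a fixed last-element pivot produces on inputs with long sorted runs or many duplicate keys – and with random strings averaging only 4.5 characters, a 10000000-line file necessarily contains large numbers of duplicates, so partitions can become maximally unbalanced. A common remedy is median-of-three pivot selection (and, for duplicate-heavy data, a three-way partition). Below is a minimal sketch of the median-of-three idea in C# – an illustration of the technique, not the poster's code; the same change carries straight over to the C implementation in the question.

        using System;

        static class QuickSortSketch
        {
            public static void Sort(string[] a, int lo, int hi)
            {
                while (lo < hi)
                {
                    int p = Partition(a, lo, hi);
                    // Recurse into the smaller half, iterate on the larger:
                    // this bounds the stack at O(log n) even on bad splits.
                    if (p - lo < hi - p) { Sort(a, lo, p - 1); lo = p + 1; }
                    else { Sort(a, p + 1, hi); hi = p - 1; }
                }
            }

            private static int Partition(string[] a, int lo, int hi)
            {
                // Median-of-three: order a[lo], a[mid], a[hi] and take the median
                // as pivot, so sorted or reverse-sorted runs no longer hit O(n^2).
                int mid = lo + (hi - lo) / 2;
                if (string.CompareOrdinal(a[mid], a[lo]) < 0) Swap(a, mid, lo);
                if (string.CompareOrdinal(a[hi], a[lo]) < 0) Swap(a, hi, lo);
                if (string.CompareOrdinal(a[hi], a[mid]) < 0) Swap(a, hi, mid);
                Swap(a, mid, hi - 1);                // stash the pivot just before hi
                string pivot = a[hi - 1];

                int i = lo;                          // Lomuto-style scan
                for (int j = lo; j < hi - 1; j++)
                    if (string.CompareOrdinal(a[j], pivot) < 0) Swap(a, i++, j);
                Swap(a, i, hi - 1);                  // pivot into its final position
                return i;
            }

            private static void Swap(string[] a, int i, int j)
            {
                string t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }

    With many equal keys even this still degrades, because elements equal to the pivot all land on one side; a three-way (Dutch national flag) partition that groups elements equal to the pivot is the stronger fix for short random strings.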

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound with an ASP.NET web site map file? Ever wanted to have cool CSS design stuff implemented on your ASP.NET data-bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers to the above questions are ‘yes’, then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

    Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the concept of the ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. This project template is titled “Employee Info Starter Kit”; it was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on CodePlex.

    Release Highlights

    User End Functional Specification
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below:
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Architectural Overview
    - Simple 3-layer architecture (presentation, business logic and data access layer)
    - ASP.NET web form based user interface
    - Built-in code generators for logical layers, implemented in the Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for cross-cutting concerns (such as validation, logging, exception management)

    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - SQL Server 2005 (Express Edition) or higher

    Technology Utilized
    - Programming languages/scripts: browser side: JavaScript; web server side: C#; code generation template: T4
    - Frameworks: .NET Framework 4.0; JavaScript framework: jQuery 1.5.1; CSS framework: 960 grid system
    - .NET Framework components: Entity Framework; optional/named parameters (new in .NET 4.0); Tuple (new in .NET 4.0); extension methods; lambda expressions; anonymous types; query expressions; automatically implemented properties; LINQ; partial classes and methods; generic types; nullable types
    - ASP.NET components: meta description and keyword support (new in .NET 4.0); routing (new in .NET 4.0); GridView (CSS support for sorting, new in .NET 4.0); Repeater; FormView; LoginView; SiteMapPath; skins; themes; master pages; ObjectDataSource; role-based security

    Getting Started Guide
    To see Employee Info Starter Kit in action is pretty easy!
    1. Download the latest version.
    2. Extract the file.
    3. From the extracted folder, click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010.
    4. Hit Ctrl+F5!

    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page | Installation Walkthrough | Hands-on Coding Walkthrough | Technical Reference. Enjoy!
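    Since the starter kit's whole premise is CRUD against a single Employee table through Entity Framework 4.0, here is a rough sketch of what such a round trip looks like with the EF4 ObjectContext API. The EmployeeEntities context and Employee entity names are assumptions for illustration – the starter kit generates its own types – so treat this as the general shape rather than the kit's actual code.

        using System;
        using System.Linq;

        class EmployeeCrudDemo
        {
            static void Main()
            {
                // "EmployeeEntities"/"Employee" are assumed names for illustration;
                // the starter kit's T4 templates generate the real context/entities.
                using (var context = new EmployeeEntities())
                {
                    var employee = new Employee { FirstName = "Jane", LastName = "Doe" };
                    context.Employees.AddObject(employee);   // queue INSERT
                    context.SaveChanges();

                    var found = context.Employees            // SELECT via LINQ to Entities
                                       .First(e => e.LastName == "Doe");
                    found.FirstName = "Janet";
                    context.SaveChanges();                   // UPDATE

                    context.Employees.DeleteObject(found);   // queue DELETE
                    context.SaveChanges();
                }
            }
        }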

    Read the article

  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010

    MIX10 was absolutely fantastic. Read my report of MIX10 to see why.

    The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net, became an even bigger success than I dared to dream of.

    There was a huge discrepancy between the efforts by Microsoft to support .NET user groups in organizing public live streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was “meh”, and even worse, I seriously doubt we’ll ever get that number of people to such an event (which on top of that suffered from technical difficulties beyond our control).

    What definitely would have deserved the public live streaming event treatment was the Silverlight Firestarter (aka “Silverlight Damage Control”) event. And maybe we would have thought about organizing something if it weren’t for the “burned earth” left by the PDC keynote. Anyway, the stuff shown at the Firestarter keynote was the topic of conversations among colleagues days later (“did you see that? oh yeah, that was seriously cool”).

    Things that I have learned/observed/noticed in 2010

    In the long run, there’s a huge difference between “it works pretty well” and “it just works and I never have to think about it”. I had to get rid of my USB graphics adapter powering the third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reason, at least one game I couldn’t get to run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you’re interested: the combination of a NVIDIA GTX 460 and a GT 220 has been running in “don’t make me think” mode for a couple of months now.

    PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply “good enough” and save a lot of time) or the smart guides.

    Outlook 2010 crashes on me a lot. I haven’t been successful in reproducing these crashes; they just happen every couple of days on different occasions (the only thing in common: I clicked something in the main window – yeah, very helpful observation).

    Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it’s telling that Microsoft’s message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path from the beta to the RTM sounds to me like “please, please use it NOW!”).

    I have a love/hate relationship with ReSharper. I don’t want to develop without it, but at the same time I can’t fail to notice that ReSharper is taking a heavy toll in terms of performance and sometimes stability.

    Things I’m looking forward to in 2011

    Obviously, the dotnet Cologne 2011. We have already been able to score some big-name sponsors (Microsoft, Intel), but we’re still looking for more. And be assured that we’ll make sure our partners get the most out of their contribution, regardless of how big or small.

    MIX11, period.

    Silverlight 5 is going to be great. The only thing I’m a bit nervous about is that I still haven’t read anything official on whether the next C# version’s async/await will be in it. Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).

    Read the article

  • Pondering New Technology

    - by MOSSLover
    So I have been standing at a fork for a year, peering around the corner, looking this way and that, trying to figure out where I fit in. I was so enthusiastic and excited about Silverlight when it came out. It was this amazing, awesome technology that had this really cool animation and webcam/multimedia piece. I thought if I put my money on Silverlight it was going somewhere; then HTML 5 came out and the wind shifted. I realized times were changing.

    I have been working with web technologies since I was 15 years old – playing with HTML and JavaScript, and even CSS back when it first came out. In tech years, 15 years is forever. Things change so quickly and so often. So I guess the question is: where am I heading? The answer is mobile technology. For some reason I was resisting change, and I have no idea why. I guess I really wanted to see more than one player. I didn’t quite feel that Microsoft was ready with Windows Phone 7. It was a great start, but it just didn’t feel like they went all the way. Now with Windows 8 it feels like they are at version 2.0. It’s like hitting Silverlight 2.0, where they finally added the .NET bits. The path is paved, but we don’t know where it’s leading. Then we had 3.0, 4.0, and 5.0 to mature the technology into what it is today (man, I’m hoping they are going to roll some of the cool bits into other tech if they don’t exist).

    Anyway, I’m on board, but I’m not buying a Windows tablet just yet. I was hoping for a 7-inch screen from Apple around $300 or just above, and a 7-inch screen from the MS side around the same price. What I got was the Apple side, but nothing from Microsoft. I was pretty disappointed with the $500 market price on the RT version. I realized Microsoft is close, but not quite where Apple is today. Yes, the devices come with Office, but the sticker is just too much for a first-generation device. Then again, if you remember correctly, the first-generation iPad was also quite expensive, so I guess for a first-generation device $500 is pretty good.

    So I guess what I’m trying to say is that I am shifting my focus entirely away from Silverlight and more towards mobile. I will be doing a lot of postings on iOS, Windows 8, and Windows Phone 8 with SharePoint 2013. Since I don’t have a tablet and don’t foresee myself buying one just yet, it might be mostly on the phones for right now. I want to do a bunch of testing on various devices on what needs to be done in apps on each device. I might add a bit on porting code from one to the other. I think it’s going to be a lot of fun and make things flow a little better for me. In a way it’s kind of like Star Trek VI, where they talk about the Undiscovered Country. I’m going to jump forward completely and see where I land.

    Technorati Tags: SharePoint 2013, Mobile, Windows 8, iOS

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route-finding routine in VB.NET for an online game I play, and I'm searching for the fastest pathfinding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches – more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's; both find the best path but are too slow for my purposes – a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change – unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* – maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick Euclidean calculation from the node to the goal.

    In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            '    Return hCache
            'End If
            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer
            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas
                                 Where result.systemId = startSystem
                                 Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas
                               Where result.systemId = endSystem
                               Select result.x, result.y, result.z
            'LINQ execute
            'Fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = (solarSystem.x)
                y1 = (solarSystem.y)
                z1 = (solarSystem.z)
            Next
            For Each solarSystem In systemToData
                x2 = (solarSystem.x)
                y2 = (solarSystem.y)
                z2 = (solarSystem.z)
            Next
            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)
            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
            'hCache = distance
            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() <> 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)
            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next
            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.
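    One thing stands out in the profile above: findDistance issues two LINQ queries against the database context on every call, so the A* inner loop is dominated by data access rather than arithmetic, and lowestF() suggests a linear scan of the open list on every iteration. The two standard fixes are to load all coordinates into an in-memory dictionary once at startup, and to keep the open list in a binary-heap priority queue so extraction is O(log n) instead of O(n). Here is a sketch of the lookup side in C# (the VB.NET translation is mechanical); the type and property names are assumptions for illustration, not the poster's actual types.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Coord { public int X, Y, Z; }

        class JumpHeuristic
        {
            // Built once at startup from the coordinateData table,
            // replacing the per-call LINQ-to-database round trips.
            private readonly Dictionary<int, Coord> coords;

            public JumpHeuristic(IEnumerable<(int id, int x, int y, int z)> rows)
            {
                coords = rows.ToDictionary(
                    r => r.id,
                    r => new Coord { X = r.x, Y = r.y, Z = r.z });
            }

            // Straight-line distance; it never overestimates when the movement
            // cost is the actual jump length, so A* keeps returning optimal paths.
            public double Estimate(int fromSystem, int toSystem)
            {
                Coord a = coords[fromSystem], b = coords[toSystem];
                double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
                return Math.Sqrt(dx * dx + dy * dy + dz * dz);
            }
        }

    With the coordinates cached this way, the heuristic becomes a hash lookup plus a square root, which by itself should remove most of the 60% the profiler attributes to findDistance before the open-list structure is touched at all.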

    Read the article

  • Doing your first mock with JustMock

    - by mehfuzh
    In this post, I will start with a more traditional mocking example: a fund transfer scenario between two accounts in different currencies, using JustMock. Our target interface that we will be mocking looks like:

        public interface ICurrencyService
        {
            float GetConversionRate(string fromCurrency, string toCurrency);
        }

    Moving forward, the SUT – the class that consumes the service and will be invoked by the user (provided that the ICurrencyService is passed in, DI style) – looks like:

        public class AccountService : IAccountService
        {
            private readonly ICurrencyService currencyService;

            public AccountService(ICurrencyService currencyService)
            {
                this.currencyService = currencyService;
            }

            #region IAccountService Members

            public void TransferFunds(Account from, Account to, float amount)
            {
                from.Withdraw(amount);
                float conversionRate = currencyService.GetConversionRate(from.Currency, to.Currency);
                float convertedAmount = amount * conversionRate;
                to.Deposit(convertedAmount);
            }

            #endregion
        }

    As we can see, there is a TransferFunds action implemented from IAccountService that takes a source account from which it withdraws some money and a target account to which the transfer takes place using the provided conversion rate.

    Our first step is to create the mock. The syntax for creating your instance mocks is pretty much the same and is valid for all interfaces and non-sealed/sealed concrete instance classes. You can pass in additional options, like whether it is a strict mock or not; by default all mocks in JustMock are loose, and you can use them as default-valued objects or stubs as well.

        ICurrencyService currencyService = Mock.Create<ICurrencyService>();

    Using JustMock, setting up your expectations and asserting them always goes through Mock.Arrange|Assert, and this is pretty much the same syntax no matter what type of mocking you are doing. Therefore, in the above scenario, we want to make sure that the conversion rate always returns 2.20f when converting from GBP to CAD. To do so we need to arrange it the following way:

        Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f).MustBeCalled();

    Here, I have additionally marked the mock call as a must. That means it should be invoked anywhere in the code before we do Mock.Assert; we can also assert mocks directly through lambda expressions, but the more general Mock.Assert(mocked) will assert only the setups that are marked as MustBeCalled(). Now, coming back to the main topic: as we have set up the mock, it is time to act on it. Therefore, first we create our account service class and create our from and to accounts respectively.

        var accountService = new AccountService(currencyService);

        var canadianAccount = new Account(0, "CAD");
        var britishAccount = new Account(0, "GBP");

    Next, we add some money to the GBP account:

        britishAccount.Deposit(100);

    Finally, we do our transfer by the following:

        accountService.TransferFunds(britishAccount, canadianAccount, 100);

    Once everything is completed, we need to make sure that things are as we expected, so it is time for assertions. Here, we first do the general assertions:

        Assert.Equal(0, britishAccount.Balance);
        Assert.Equal(220, canadianAccount.Balance);

    Following that, we do our mock assertion; as we have marked the call as MustBeCalled, this will make sure that our mock was actually invoked. Moreover, we can add filters like how many times our expected mock call has occurred; that will be covered in coming posts.

        Mock.Assert(currencyService);

    So far, that actually concludes our first mock with JustMock. Do stay tuned for more. Enjoy!!
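    One gap worth noting for readers following along: the Account class is used throughout but never shown in the post. A minimal version consistent with the calls above (a balance/currency constructor, Deposit, Withdraw, and readable Balance and Currency properties) might look like the sketch below – an assumption for completeness, not the author's actual class.

        public class Account
        {
            public Account(float balance, string currency)
            {
                Balance = balance;
                Currency = currency;
            }

            public float Balance { get; private set; }
            public string Currency { get; private set; }

            public void Deposit(float amount) { Balance += amount; }
            public void Withdraw(float amount) { Balance -= amount; }
        }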

    Read the article

  • Using the SOA-BPM VIrtualBox Appliance

    - by antony.reynolds
    Quickstart Guide to Using the Oracle Appliance for SOA/BPM

    Recently I have been setting up some machines for fellow engineers. My base setup consists of Oracle Enterprise Linux with Oracle VirtualBox. Note that after installing VirtualBox I needed to add the VirtualBox Extension Pack to enable RDP access amongst other features. In order to get them started quickly with some images, I downloaded the pre-built appliance for SOA/BPM from OTN. Out of the box this provides a VirtualBox image that is pre-installed with everything you will need to develop SOA/BPM applications. Specifically, by using the virtual appliance I got the following pre-installed and configured:

    - Oracle Enterprise Linux 5: user oracle, password oracle; user root, password oracle.
    - Oracle Database XE: pre-configured with the SOA/BPM repository; set to auto-start on OS startup.
    - Oracle SOA Suite 11g PS2: configured with a “collapsed domain”, all services (SOA/BAM/EM) running in the AdminServer, listening on port 7001.
    - Oracle BPM Suite 11g: configured in the same domain as SOA Suite.
    - Oracle JDeveloper 11g: with SOA/BPM extensions.

    Networking

    The VM by default uses NAT (Network Address Translation) for network access. Make sure that the advanced settings for port forwarding allow access through the host to guest ports. It should be pre-configured to forward requests on the following ports:

    - SSH: host port 2222 -> guest port 22
    - HTTP: host port 7001 -> guest port 7001
    - Database: host port 1521 -> guest port 1521

    Note that only one VirtualBox image can use a given host port, so make sure you are not clashing if it seems not to work.

    What’s Left to Do?

    There is still some customization of the environment that may be required. If you need to configure a proxy server, as I did, then for the oracle and root users set up an HTTP proxy:

    - Added “export http_proxy=http://proxy-host:proxy-port” to ~oracle/.bash_profile and ~root/.bash_profile
    - Added “export http_proxy=http://proxy-host:proxy-port” to /etc/.bashrc
    - Edited System->Preferences to set Network Proxy
    - In Firefox, set Preferences->Network->Connection Settings to “Use system proxy settings”
    - In JDeveloper, set Edit->Preferences->Web Browser and Proxy to the required proxy settings

    You may need to configure yum to point to a public OEL yum repository – such as http://public-yum.oracle.com. If you are going to be accessing the SOA server from outside the VirtualBox image then you may want to set the soa-infra Server URLs to the hostname of the host OS.

    Snap!

    Once I had the machine configured how I wanted to use it, I took a snapshot so that I can always get back to the pristine install I have now. Snapshots are one of the big benefits of putting a development environment into a virtualized environment. I can make changes to my installation and if I mess it up I can restore the image to a last known good snapshot.

    Hey Presto!, Ready to Go

    This is the quickest way to get up and running with SOA/BPM Suite. Out of the box the download will work; I only did extra customization so I could use services outside the firewall and browse outside the firewall from within my SOA VirtualBox image. I also used yum to update the OS to the latest binaries. So have fun.
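    If a forwarding rule is missing, or clashes with one already used by another image, it can be inspected and recreated from the host command line. A sketch, assuming the imported VM is named "SOABPM" – substitute the actual name reported by VBoxManage list vms:

        # Show the VM's NICs, including existing NAT port-forwarding rules
        VBoxManage showvminfo "SOABPM"

        # Add rules on NAT adapter 1: "name,protocol,hostip,hostport,guestip,guestport"
        VBoxManage modifyvm "SOABPM" --natpf1 "ssh,tcp,,2222,,22"
        VBoxManage modifyvm "SOABPM" --natpf1 "http,tcp,,7001,,7001"
        VBoxManage modifyvm "SOABPM" --natpf1 "db,tcp,,1521,,1521"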

    Read the article

  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the concept of the ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production.

    User Stories
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below:
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Key Technology Areas
    - ASP.NET 4.0
    - Entity Framework 4.0
    - T4 templates
    - Visual Studio 2010

    Architectural Objective
    There is no universal architecture which can be considered the best for all sorts of applications around the world. Based on requirements, constraints and environment, application architecture can differ from one case to another. Trade-off factors are one of the important considerations while deciding on a particular architectural solution. Employee Info Starter Kit is highly influenced by the concept of the ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. “Productivity” as the architectural objective typically also covers other trade-off factors, such as testability, flexibility and performance. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been used cleverly in this project to reduce these trade-offs to a minimum.

    Why Employee Info Starter Kit is Not a Framework
    Application frameworks are really great for productivity, and some of them are really unavoidable in this modern age. However, relying on too many frameworks may overkill a project, as frameworks are typically designed to serve a wide range of different usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as it enables them to adjust the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly, which are fully mock-testable due to language support for partial methods and the latest support for mock testing in Entity Framework (a sketch of the partial-method hook pattern follows at the end of this entry).

    Why Employee Info Starter Kit is Different than Other Open-source Web Applications
    Software development is one of the most rapidly growing industries around the globe, where technology is updated very frequently to meet greater challenges over time. There are literally thousands of community web sites, blogs and forums that are dedicated to providing support for adopting new technologies. While some are really great for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real-world scenarios or too “complex and detailed”, typically focused on achieving a product goal (such as CMS, e-Commerce etc.) from an “end user” perspective, with a long learning curve with respect to the corresponding technology. Employee Info Starter Kit, as a web project, is basically “developer” oriented and takes a hybrid approach – “simple and detailed” – where a simple domain has been chosen to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly.

    Roadmap
    Since its first release in 2008 on MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since. Encouraged by this great response, we have a strong commitment to the community to continuously provide support for it with respect to the latest technologies. Currently hosted on CodePlex, this community-driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET WebForms, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Services Platform (Cloud), Visual Studio Automated Test etc. See here for the full list of current and future editions.
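    The partial-method point above is what lets the T4-generated layers stay customizable without ever editing generated files: the generator emits hook declarations, and hand-written code in a second partial class optionally implements them; if no implementation exists, the compiler removes the calls entirely. A minimal sketch of the pattern – the repository and hook names here are invented for illustration, not the starter kit's actual generated members:

        using System;

        // Generated file (T4 output) – regenerated at will, never edited by hand.
        public partial class EmployeeRepository
        {
            // Hook declared by the generator; implicitly private, must return void.
            partial void OnSaving(Employee employee);

            public void Save(Employee employee)
            {
                OnSaving(employee);   // custom validation/logging slots in here
                // ... generated persistence code ...
            }
        }

        // Hand-written file – survives regeneration.
        public partial class EmployeeRepository
        {
            partial void OnSaving(Employee employee)
            {
                if (string.IsNullOrEmpty(employee.LastName))
                    throw new ArgumentException("LastName is required.");
            }
        }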

    Read the article

  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit) thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance it really makes me cross when folk think comparing an array of SSDs running at 3Gbit/s to hard drives running at 6Gbit/s is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with solid state drives, they compare four SSDs connected at 3Gbit/s against ten 10Krpm drives connected at 6Gbit/s [Tony slaps forehead while shouting DOH!].

    It is true that in the case of hard drives it probably doesn’t make much difference whether it’s 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiB/s throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16, a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, for a 100% sequential read test) I only get 347MiB/s sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, if that was 3Gbit it would be less – around 280MiB/s. Oh, but wait a minute [...fingers tap desk]: you’ll struggle to find an SSD in the commodity space that doesn’t have the SATA 3 (6Gbit) interface. SSDs are fast – not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3: it so happens that in my masters dissertation I did the same test but on a different box, and I got 374MiB/s at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive sat in a box connected with SATA 2 (3Gbit) would only yield around 280MiB/s, thus losing almost 100MiB/s of throughput and a ton of IOps too.

    So why the hell are “enterprise” vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory states that they have no interest in you moving to SSD because they’d lose so much money. The argument that they use SATA 2 doesn’t wash; SATA 3 has been out for some time now and all the commodity stuff you buy uses it. Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that. It used to be true that hard drives thrashed SSDs on price per GB, but is that true now? I’m not so sure – a 300GB 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. OK, I compared an “enterprise” hard drive with a “commodity” SSD, so things get a little more complicated here: most “enterprise” SSDs are SLC and most commodity ones are MLC; SLC gives more performance and wear, and I’ll talk about that another day.

    For now though, don’t get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn’t cut it; SSDs need 6Gbit to breathe, and even that they are pushing. Alas, SSDs are connected using SATA, and all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.
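    For readers wondering where the ~280–300MiB/s ceiling on SATA 2 comes from: SATA uses 8b/10b encoding on the wire, so only 8 of every 10 bits transferred carry data. Roughly:

        SATA 2:  3.0 Gbit/s x 8/10 = 2.4 Gbit/s = 300 MB/s usable
        SATA 3:  6.0 Gbit/s x 8/10 = 4.8 Gbit/s = 600 MB/s usable

    Protocol overhead eats a little more, which is why sustained figures land just under these ceilings – hence the ~280MiB/s quoted above for 3Gbit, and why a drive rated at 525MB/s reads like the Agility 3 is already crowding the SATA 3 limit.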

    Read the article

  • Silverlight Cream for December 27, 2010 -- #1016

    - by Dave Campbell
    In this Issue: Sacha Barber, David Anson, Jesse Liberty, Shawn Wildermuth, Jeff Blankenburg(-2-), Martin Krüger, Ryan Alford(-2-), Michael Crump, Peter Kuhn(-2-).

    Above the Fold:
    Silverlight: "Part 4 of 4 : Tips/Tricks for Silverlight Developers" Michael Crump
    WP7: "Navigating with the WebBrowser Control on WP7" Shawn Wildermuth

    Shoutouts: John Papa posted that the open call is up for MIX11 presenters: Your Chance to Speak at MIX11

    From SilverlightCream.com:

    Aspect Examples (INotifyPropertyChanged via aspects) - If you're wanting to read a really in-depth discussion of aspect-oriented programming (AOP), check out the article Sacha Barber has up at CodeProject discussing INPC via aspects.

    How to: Localize a Windows Phone 7 application that uses the Windows Phone Toolkit into different languages - David Anson has a nice tutorial up on localizing your WP7 app, including using the Toolkit and controls such as DatePicker... remember we're talking localized Windows Phone.

    From Scratch – Animation Part 1 - Jesse Liberty continues in his 'From Scratch' series with this first post on WP7 animation... good stuff, Jesse!

    Navigating with the WebBrowser Control on WP7 - In building his latest WP7 app, Shawn Wildermuth ran into some obscure errors surrounding browser.InvokeScript. He lists the simple solution and his back, refresh, and forward button functionality for us.

    What I Learned In WP7 – Issue #7 - In the time I was out, Jeff Blankenburg got ahead of me, so I'll catch up 2 at a time... in this number 7 he discusses making videos of your apps, links to the Learn Visual Studio series, and his new website.

    What I Learned In WP7 – Issue #8 - Jeff Blankenburg's number 8 is a very cool tip on using the return key on the keyboard to handle the loss of focus and handling of text typed into a textbox.

    Resize of a grid by using thumb controls - Martin Krüger has a sample in the Expression Gallery of a grid that is resizable by using 'thumb controls' at the 4 corners... all source, so check it out!

    Silverlight 4 – Productivity Power Tools and EF4 - Ryan Alford found a very interesting bug associated with EF4 and the Productivity Power Tools, and the way to get out of it is just weird as well.

    Silverlight 4 – Toolkit and Theming - Ryan Alford also had a problem adding a theme from the Toolkit, and what all you might have to do to get around this one....

    Part 4 of 4 : Tips/Tricks for Silverlight Developers - Michael Crump has part 4 of his series on Silverlight development tips and tricks. This is numbers 16 through 20 and covers topics such as version information, using lambdas, specifying a development port, disabling the ChildWindow Close button, and XAML cleanup.

    The XML content importer and Windows Phone 7 - Peter Kuhn wanted to use the XML content importer with a WP7 app and ran into problems implementing the process and a lack of documentation as well... he pounded through it all and has a class he's sharing for loading sounds via XML file settings.

    WP7 snippet: analyzing the hyperlink button style - In a second post, Peter Kuhn responds to a forum discussion about the styles for the hyperlink button in WP7 and why they're different than SL4... and styles-to-go to get all the hyperlink goodness you want... wrapped text, or even non-text content.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • News you can use, PeopleTools gems at OpenWorld 2012

    - by PeopleTools Strategy
    Here are some of the sessions which may not have caught your eye while scheduling the events you would like to attend at this year's OpenWorld!

    CON9183 PeopleSoft Technology Roadmap - Jeff Robbins - Mon, Oct 1, 4:45 PM - Moscone West, Room 3002/4
    Jeff's session is always very well attended. Come to hear, and see, what's going to be delivered in the new release and get some thoughts on where PeopleTools and the industry are heading.

    CON9186 Delivering a Ground-Breaking User Interface with PeopleTools - Matt Haavisto, Steve Elcock - Wed, Oct 3, 3:30 PM - Moscone West, Room 3009
    This session will be wonderfully engaging for participants. As part of our demonstration, audience members will be able to interact live and in real time with our demo using their smartphones and tablets, as if they were users of the system.

    CON9188 A Great User Experience via PeopleSoft Applications Portal - Matt Haavisto, Jim Marion, Pramod Agrawal - Mon, Oct 1, 12:15 PM - Moscone West, Room 3009
    This session covers not only the PeopleSoft Portal, but new features like WorkCenters and Dashboards, and how they all work together to form the PeopleSoft ecosystem.

    CON9192 Implementing a PeopleSoft Maintenance Strategy with My Update Manager - Mike Thompson, Mike Krajicek - Tue, Oct 2, 1:15 PM - Moscone West, Room 3009
    The LCM development team will show Oracle's My Update Manager for PeopleSoft and how it drastically simplifies deciding what updates are required for your specific environment.

    CON9193 Understanding PeopleSoft Maintenance Tools & How They Fit Together - Mike Krajicek - Wed, Oct 3, 10:15 AM - Moscone West, Room 3002/4
    Learn about the portfolio of maintenance tools, including some of the latest enhancements such as Oracle's My Update Manager for PeopleSoft, Application Data Sets, and the PeopleSoft Test Framework, and see what they can do for you.

    CON9200 PeopleTools Product Team Panel Discussion - Jeff Robbins, Willie Suh, Virad Gupta, Ravi Shankar, Mike Krajicek - Wed, Oct 3, 5:00 PM - Moscone West, Room 3009
    Attend this session to engage in an open discussion with key members of Oracle's PeopleTools senior management team. You will be able to ask questions, hear their thoughts, and gain their insight into the PeopleTools product direction.

    CON9205 Securing Your PeopleSoft Integration Infrastructure - Greg Kelly, Keith Collins - Tue, Oct 2, 10:15 AM - Moscone West, Room 3011
    This session, with the senior integration developer, will outline Oracle's best practices for securing your integration infrastructure so that you know your web services and REST services are as secure as the rest of your PeopleSoft environment.

    CON9210 Performance Tuning for the PeopleSoft Administrator - Tim Bower, David Kurtz - Mon, Oct 1, 10:45 AM - Moscone West, Room 3009
    Meet long-time technical consultants with deep knowledge of system tuning: Tim Bower of the Center of Excellence and David Kurtz, author of "PeopleSoft for the Oracle DBA". System administrators new to tuning a PeopleSoft environment, as well as seasoned experts, will come away with new techniques that will help them improve the performance of their PeopleSoft system.

    CON9055 Advanced Management of Oracle PeopleSoft with Oracle Enterprise Manager - Greg Kelly, Milten Garia, Greg Bouras - Thurs, Oct 4, 12:45 PM - Moscone West, Room 3009
    This promises to be a really interesting session, as Milten Garia from CSU discusses the lessons learned during the implementation of Oracle's Enterprise Manager with the PeopleSoft plug-in across a multi-campus environment. There were some surprising things about Solaris 10 and the Bourne shell, and it took some creative work by the Unix administrators to keep the well-tried scripts and system replication processes largely unaffected.

    CON8932 New Functional PeopleTools Capabilities for the Line of Business User - Jeff Robbins - Tues, Oct 2, 5:00 PM - Moscone West, Room 3007
    Using PeopleTools 8.5x capabilities like related content, embedded help, pivot grids, hover-over, and more, Jeff will discuss how these can deliver business value and innovation which will positively impact your business without the high costs associated with upgrading your PeopleSoft applications.

    Check out a more detailed list here. We look forward to meeting you all there!

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Search

    - by thatjeffsmith
    photo: Stuck in Customs via photopin cc

    The next version of Oracle SQL Developer Data Modeler is now available as an Early Adopter (read: beta) release. There are many new major feature enhancements to talk about, but today's focus will be on the brand new Search mechanism.

    Data, data, data – SO MUCH data

    Google has made countless billions of dollars around a very efficient and intelligent search business. People have become accustomed to having their data accessible AND searchable. Data models can have thousands of entities or tables, each having dozens of attributes or columns. Imagine how hard it could be to find what you're looking for here. This is the challenge we have tackled head-on in v3.3.

    [Screenshot: same location as the Search toolbar in Oracle SQL Developer (and most web browsers)]

    Here's how it works:
    - Search as you type – wicked fast, as the entire model is loaded into memory
    - Supports regular expressions (regex)
    - Results loaded to a new panel below
    - Search across designs and models
    - Search EVERYTHING, or filter by type
    - Save your frequent searches
    - Save your search results as a report
    - Open common properties of an object in the search results and edit basic properties on the fly

    Want to just watch the video? We have a new Oracle Learning Library resource available now which introduces the new and improved Search mechanism in SQL Developer Data Modeler. Go watch the video and then come back.

    Some Screenshots

    This will be a pretty easy feature to pick up. Search is intuitive – we've already learned how to do search. Now we just have a better interface for it in SQL Developer Data Modeler. But just in case you need a couple of pointers...

    [Screenshot: the SYS data dictionary in model form, with search results]

    If I type 'translation' in the search dialog, then the results will come up as hits are 'resolved.' By default, everything is searched, although I can filter the results after the fact. You can see where the search finds a match in the 'Content' column.

    Save the Results as a Report

    If you limit the search results to a category and a model, then you can save the results as a report. You can optionally include the search string, which displays at the top of the report as 'PATTERN.' You can save your common reporting setups as templates and reuse those as well.

    [Screenshot: a sample HTML report – yes, I like to search my search results report!]

    Two More Ways to Search

    You can search 'in context' by opening the 'Find' dialog from an active design. You can do this using the 'Search' toolbar button or from a model context menu. Instead of bringing up the old modal Find dialog, you now get to use the new and improved Search panel. Notice there's no 'Model' drop-down to select, and that the active Search form is now in the Search panel versus the search toolbar up top.

    [Screenshot: searching a specific model]

    What else is new in SQL Developer Data Modeler version 3.3? All kinds of goodies. You can send your model to Excel for quick edits/reviews and suck the changes back into your model, you can share objects between models, and much much more. You'll find new videos and blog posts on the subject in the next few days and weeks. Enjoy!

    If you have any feedback or want to report bugs, please visit our forums.

    Read the article

  • Oracle and Partners release CAMP specification for PaaS Management

    - by macoracle
Cloud Application Management for Platforms

The public release of the Cloud Application Management for Platforms (CAMP) specification, an initial draft of what is expected to become an industry-standard self-service interface specification for Platform as a Service (PaaS) management, represents a significant milestone in cloud standards development. Created by several players in the emerging cloud industry, including Oracle, the specification is being submitted to the OASIS standards organization (draft charter), where it will be finalized in an open development process.

CAMP is targeted at application developers and deployers for self-service management of their applications on a Platform-as-a-Service cloud. It is closely aligned with the application development process, where applications are typically developed in an Application Development Environment (ADE) and then deployed into a private or public platform cloud. CAMP standardizes the model behind an application's dependencies on platform components and provides a standardized format for moving applications between the ADE and the cloud, and, if and when desirable, between clouds. Once an application is deployed, CAMP provides users with a standardized self-service interface to the PaaS offering, allowing the cloud consumer to manage the lifecycle of the application on that platform and the use of the underlying platform services.

The CAMP interface includes a RESTful binding of the CAMP model onto the standard HTTP protocol, using JSON as the encoding for the model resources (a hypothetical sketch of such a resource appears at the end of this post). The model for CAMP includes resources that represent the Application, its Components, and any Platform Components that they depend on. It's important that PaaS cloud consumers understand that for a PaaS cloud, these are the abstractions the user would prefer to work with, not virtual machines and resources such as compute power, storage, and networking. Nor do PaaS cloud consumers want to become system administrators for the infrastructure that is hosting their applications and component services. CAMP works at this more abstract level, yet still accommodates platforms that are built on an underlying infrastructure cloud. With CAMP, it is up to the cloud provider whether or not this underlying infrastructure is exposed to the consumer.

One major challenge addressed by the CAMP specification is ensuring that application deployment on a new platform is as seamless and error-free as possible. This becomes even more difficult when the application may have been developed for a different platform and is now moving to a new one. CAMP accomplishes this by matching the requirements of the application and its components to the specific capabilities of the underlying platform. This needs to be done regardless of whether there are existing pools of virtualized platform resources (such as a database pool) that are provisioned (on the basis of a schema, for example), or whether the platform component is really just a set of virtual machines drawn from an infrastructure pool.

The interoperability between platform clouds that CAMP offers means that a CAMP client such as an ADE can target multiple clouds with a single common interface. Applications can even be spread across multiple platform clouds and then managed without needing to create a specialized adapter for the components running in each cloud.

The development of CAMP has been an effort by a small set of companies, but there are significant advantages to this approach. For example, the way each of these companies builds its platform is different enough to ensure that CAMP can cover a wide range of actual deployments. CAMP is now entering the next phase of development under the guidance of an open standards organization, OASIS, which will likely broaden its capabilities. Our hope, however, is to keep it concise and minimal to ease implementation and adoption. Over time there will be many different types of platform components that applications can use and that need management; at this point CAMP includes only one example of this (in an appendix): Database as a Service. I am looking forward to the start of the CAMP Technical Committee in OASIS and will do my best to ensure a successful development process. Hope to see you there.
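To make the resource model concrete, here is a minimal, purely hypothetical sketch of what an application resource might look like over the RESTful JSON binding described above. The URIs, attribute names, and values are illustrative assumptions, not excerpts from the specification.

GET /camp/application/orders-app HTTP/1.1
Accept: application/json

{
  "name": "orders-app",
  "description": "Hypothetical CAMP application resource",
  "components": [
    { "href": "/camp/component/web-tier" },
    { "href": "/camp/component/orders-db" }
  ],
  "platformComponents": [
    { "href": "/camp/platform/java-container" },
    { "href": "/camp/platform/database-service" }
  ]
}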

    Read the article

  • Linux router: ping doesn't route back

    - by El Barto
    I have a Debian box which I'm trying to set up as a router and an Ubuntu box which I'm using as a client. My problem is that when the Ubuntu client tries to ping a server on the Internet, all the packets are lost (though, as you can see below, they seem to go to the server and back without problem). I'm doing this in the Ubuntu Box: # ping -I eth1 my.remote-server.com PING my.remote-server.com (X.X.X.X) from 10.1.1.12 eth1: 56(84) bytes of data. ^C --- my.remote-server.com ping statistics --- 13 packets transmitted, 0 received, 100% packet loss, time 12094ms (I changed the name and IP of the remote server for privacy). From the Debian Router I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 7, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 8, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 8, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 9, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 9, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 10, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 10, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 11, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 11, length 64 ^C 9 packets captured 9 packets received by filter 0 packets dropped by kernel # tcpdump -i eth2 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 213, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 213, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 214, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 214, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 215, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 215, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 216, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 216, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 217, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 217, length 64 ^C 10 packets captured 10 packets received by filter 0 packets dropped by kernel And at the remote server I see this: # tcpdump -i eth0 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 1, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 1, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 2, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 2, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 3, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 3, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 4, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 4, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 5, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 5, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 6, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 6, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 7, 
length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 7, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 8, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 8, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 9, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 9, length 64 18 packets captured 228 packets received by filter 92 packets dropped by kernel Here "X.X.X.X" is my remote server's IP and "Y.Y.Y.Y" is my local network's public IP. So, what I understand is that the ping packets are coming out of the Ubuntu box (10.1.1.12), to the router (10.1.1.1), from there to the next router (192.168.1.1) and reaching the remote server (X.X.X.X). Then they come back all the way to the Debian router, but they never reach the Ubuntu box back. What am I missing? Here's the Debian router setup: # ifconfig eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:105761 errors:0 dropped:0 overruns:0 frame:0 TX packets:48944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:40298768 (38.4 MiB) TX bytes:44831595 (42.7 MiB) Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:38335992 errors:0 dropped:0 overruns:0 frame:0 TX packets:37097705 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:4260680226 (3.9 GiB) TX bytes:3759806551 (3.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3408 errors:0 dropped:0 overruns:0 frame:0 TX packets:3408 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:358445 (350.0 KiB) TX bytes:358445 (350.0 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:2767779 errors:0 dropped:0 overruns:0 frame:0 TX packets:1569477 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3609469393 (3.3 GiB) TX bytes:96113978 (91.6 MiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Ubuntu box (on 10.1.1.12 and 192.168.1.12) Address HWtype HWaddress Flags Mask Iface 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.102 ether NN:NN:NN:NN:NN:NN C eth2 10.1.1.12 ether 00:1e:67:15:2b:f0 C eth1 192.168.1.86 ether NN:NN:NN:NN:NN:NN C 
eth2 192.168.1.2 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.40 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.12 ether 00:1e:67:15:2b:f1 C eth2 192.168.1.77 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.41 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.123 ether NN:NN:NN:NN:NN:NN C eth2 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 10.1.1.0/24 !10.1.1.0/24 MASQUERADE all -- !10.1.1.0/24 10.1.1.0/24 Chain OUTPUT (policy ACCEPT) target prot opt source destination And here's the Ubuntu box: # ifconfig eth0 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f1 inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:28785139 errors:0 dropped:0 overruns:0 frame:0 TX packets:19050735 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32068182803 (32.0 GB) TX bytes:6061333280 (6.0 GB) Interrupt:16 Memory:b1a00000-b1a20000 eth1 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f0 inet addr:10.1.1.12 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:285086 errors:0 dropped:0 overruns:0 frame:0 TX packets:12719 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:30817249 (30.8 MB) TX bytes:2153228 (2.1 MB) Interrupt:16 Memory:b1900000-b1920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:86048 errors:0 dropped:0 overruns:0 frame:0 TX packets:86048 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11426538 (11.4 MB) TX bytes:11426538 (11.4 MB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.1.1 0.0.0.0 UG 100 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.8.0.0 192.168.1.10 255.255.255.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Debian box (on 10.1.1.1 and 192.168.1.10) Address HWtype HWaddress Flags Mask Iface 192.168.1.70 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.97 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.103 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.13 ether 
NN:NN:NN:NN:NN:NN C eth0 192.168.1.120 (incomplete) eth0 192.168.1.111 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.51 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.102 (incomplete) eth0 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.74 (incomplete) eth0 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.121 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.71 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.88 (incomplete) eth0 192.168.1.82 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.98 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.73 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.11 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.85 (incomplete) eth0 192.168.1.112 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.81 ether NN:NN:NN:NN:NN:NN C eth0 10.1.1.1 ether 94:0c:6d:82:0d:98 C eth1 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.10 ether 6c:f0:49:a4:47:38 C eth0 192.168.1.86 (incomplete) eth0 192.168.1.119 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth1 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth0 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination Edit: Following Patrick's suggestion, I did a tcpdump con the Ubuntu box and I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 1, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 1, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 2, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 2, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 3, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 3, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 4, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 4, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 5, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 5, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 6, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 6, length 64 ^C 12 packets captured 12 packets received by filter 0 packets dropped by kernel So the question is: if all packets seem to be coming and going, why does ping report 100% packet loss?
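For what it's worth, one plausible explanation consistent with these captures (offered as a guess, not a confirmed diagnosis) is Linux reverse-path filtering. tcpdump taps packets before the kernel's rp_filter check, so the echo replies can appear in the eth1 capture on the Ubuntu box and still be silently dropped if the preferred route back to X.X.X.X points out eth0. A quick check, and a temporary switch to loose-mode filtering for testing, might look like this on the Ubuntu box:

# Show the current reverse-path filter settings
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter

# Temporarily relax to loose mode (2) to test the theory; revert afterwards
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
sudo sysctl -w net.ipv4.conf.eth1.rp_filter=2

# Confirm which interface the kernel would use to reach the remote server
ip route get X.X.X.X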

    Read the article

  • SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB

    - by Pinal Dave
Let us take a look at the application you own at your business. If you pay attention to the underlying database for that application you will be amazed. Every successful business these days processes way more data than it used to. The number of transactions and the amount of data are growing at an exponential rate; every single day there is more data to process than the day before. Big data is no longer a concept; it is now turning into reality. If you look around, there are so many different big data solutions that it can be quite a difficult task to figure out where to begin. Personally, I have been experimenting with a lot of different solutions which allow my database to scale immediately without much hassle while maintaining optimal database performance. There certainly are solutions out there, but for many of them I would even have to learn a new language, and there is a lot of new exploration to do. Honestly, what I prefer is a product which works with the language I know (SQL) and follows all the RDBMS concepts I am familiar with (ACID, etc.). NuoDB is one such solution. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database.

Step 1: Follow me and go to the NuoDB download page. Simply fill out the form, accept the online license agreement, and you will be taken directly to a page where you can select whichever platform you prefer to install NuoDB on. In my example below, I select the Windows 64-bit platform, as it is one of the most popular NuoDB platforms. (You can also run NuoDB on Amazon Web Services, but I prefer to install it on my local machine for the purposes of this blog.)

Step 2: Once you have downloaded the NuoDB installer, double-click on it to install it on the Windows platform. Here is the enlarged icon of the installer.

Step 3: Follow the installation wizard; it is straightforward and easy. I have selected all the options to install, as the overall installation is very simple and does not take up much space. I have installed it on my C drive, but you can select your preferred drive. It is quite possible that if you do not have 64-bit Java, it will throw the following error. If you see this error, make sure that you download 64-bit Java from the following link: http://java.com/en/download/manual.jsp If you already have 64-bit Java installed, you can continue with the installation as described in the following image. Otherwise, install Java and start again from Step 1. As in my case, I already had 64-bit Java installed, and you won't believe me when I say that the entire installation of NuoDB only took me around 90 seconds. Click on Finish to exit the installation.

Step 4: Once the installation is successful, NuoDB will automatically open two tabs, Console and DevCenter, in your preferred browser. On the Console tab you can explore various components of the NuoDB solution, e.g. QuickStart, Admin, Explorer, Storefront and Samples. We will see the various components and their usage in future blog posts.

If you follow the steps in this post, you will agree that the installation of NuoDB is extremely smooth and that it is indeed a pleasure to install a database product with such ease. If you have installed other database products in the past, you will absolutely agree with me.
So download NuoDB and install it today, and in tomorrow’s blog post I will take the installation to the next level. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed. This requires Page_Load to fire, which means the page can't be cached at that point.

A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if (!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.

if (!IsPostBack)
{
    ...
    SetCacheSettings();
    ...
}

protected void SetCacheSettings()
{
    Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
    Response.Cache.SetExpires(DateTime.Now.AddHours(1));
    Response.Cache.SetSlidingExpiration(true);
    Response.Cache.SetValidUntilExpires(true);
    Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
}

The AddValidationCallback sets up an HttpCacheValidateHandler method called Validate which runs logic when a cached page is requested. The Validate method signature is standard for this method type.

public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
{
    string visited = context.Request.QueryString["v"];
    if (visited != null && "1".Equals(visited))
    {
        status = HttpValidationStatus.IgnoreThisRequest; // force a page load
    }
    else
    {
        status = HttpValidationStatus.Valid; // load from cache
    }
}

I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not.

bool hasData = HasPageBeenVisited(url);
if (hasData)
{
    url += VISITED;
}
Response.Redirect(url);

The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent; a purely hypothetical sketch follows below.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.

The reason this logic lives in the "Next" and "Previous" button click event methods is that 1) the Validate method is static, which prevents it from accessing non-static data such as the data fields for a particular page, and 2) at the time the Validate method runs, the data either has not yet been deserialized from Session state or is simply not available (a different AppDomain?); any time I accessed Session state from within the Validate method, it was empty.
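As a purely hypothetical illustration of the omitted helper (the real implementation is, as noted above, system-dependent), HasPageBeenVisited might check a per-page required field in Session state along these lines. The page URLs, Session keys, and lookup table here are invented for the example, and the code assumes a using System.Collections.Generic directive in the base page class.

// Hypothetical sketch only: maps each wizard page to the Session key of one
// required field whose presence indicates the page was already filled in.
private static readonly Dictionary<string, string> RequiredFieldByUrl =
    new Dictionary<string, string>
    {
        { "~/Step2.aspx", "Step2_Name"    },
        { "~/Step3.aspx", "Step3_Address" }
    };

protected bool HasPageBeenVisited(string url)
{
    string sessionKey;
    if (!RequiredFieldByUrl.TryGetValue(url, out sessionKey))
    {
        return false; // unknown page: treat as not yet visited
    }
    return Session[sessionKey] != null; // data present => page was visited
}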

    Read the article

  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012:

Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics
Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication."

Web Service Example - Part 3: Asynchronous
Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data."

Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations
Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco.

Oracle WebLogic Server WLS Domain Browser
My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.

Retrieve Performance Data from SOA Infrastructure Database
Another of the four blog posts published on Dec 4 by very busy Oracle Fusion Middleware A-Team member "fip," this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time."

How to Achieve OC4J RMI Load Balancing
"Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public."

From XaaS to Java EE – Which damn cloud is right for me in 2012?
Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.

Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox
"One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.)

ADF Mobile - Implementing Reusable Mobile Architecture
"Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile."

Using BPEL Performance Statistics to Diagnose Performance Bottlenecks
Someone had a busy day… This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provide a more pleasant user experience."

Thought for the Day
"If you're afraid to change something it is clearly poorly designed." — Martin Fowler Source: SoftwareQuotes.com

    Read the article

  • Why SQL Developer Rocks for the Advanced User Too

    - by thatjeffsmith
While SQL Developer may be ‘perfect for Oracle beginners,’ that doesn’t preclude advanced and intermediate users from getting their fair share of toys! I’ve been working with Oracle since the 7.3.4 days, and I think it’s pretty safe to say that the WAY an ‘old timer’ uses a tool like SQL Developer is radically different from that of a ‘beginner.’ If you’ve been reluctant to use SQL Developer because it’s a GUI, give me a few minutes to try to convince you it’s worth a second (or third) look.

1. Help when you want it, and only when you want it
One of the biggest gripes any user has with a piece of software is when said software can’t get out of its own way. When you’re typing in a word processor, sometimes you can do without the grammar and spelling checks, the offer to auto-complete your words, and all of the additional mark-up. This drives folks to programs like Notepad++ and vi. You can disable the code insight feature so you can type unmolested by SQL Developer’s attempt to auto-complete your object names. Now, if you happen to come across a long or hard-to-spell object name, you can still invoke the feature on demand using Ctrl+Spacebar. Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto)

2. Automatic File Tracking
SQL*Minus is nice. Vi is cool. Notepad++ has a lot of features I like. But not too many editors offer automatic logging of changes to your files without having to set up a source control system. I was doing some work on my login.sql. I’m not doing anything crazy, but seeing what I had done in previous iterations was helpful. Now imagine how nice it would be to have this available for your 1,000+ line scripts! Track your scripts as they change, no setup required!

3. Extend the Functionality
Know SQL and XML? Wish SQL Developer did JUST a little bit more? Build your own extensions. You can have custom context menus and object pages in just a few minutes. This is an example of lazy developers writing code that writes code.

4. Get Your Money’s Worth
You’ve licensed Enterprise Edition. You got your Diagnostic and Tuning packs. Now start using them! Not everyone has access to Enterprise Manager, especially developers. But that doesn’t mean they don’t need help with troubleshooting and optimizing poorly performing SQL statements. ASH, AWR, Real-Time SQL Monitoring and the SQL Tuning Advisor are built into the Reports and Worksheet. Yes, you could make the package calls (a rough sketch of those calls appears at the end of this post), but that’s a whole lot of typing, and I’d rather just get to the results.

5. Profile, Debug, & Unit Test PLSQL
An Interactive Development Environment (IDE) built by the same folks that own the programming language (Hello – Oracle PLSQL!) should be complete. It should ‘hug’ the developer and empower them to churn out programs that work, run fast, and are easy to maintain. Write it, test it, debug it, and tune it. When you’re running your programs and you just want to see the data that’s returned, that shouldn’t require any special settings or workarounds to make it happen either. Magic!

And a whole lot more…
I could go on and talk about the support for things like DataPump, RMAN, and DBMS_SCHEDULER, but you’re experts and you’re plenty busy. If you think SQL Developer is falling short somewhere, I want you to let us know about it.
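For the curious, here is a rough, hedged sketch of the kind of DBMS_SQLTUNE calls that the Worksheet and Reports wrap for you when invoking the SQL Tuning Advisor. The sql_id and task name are placeholders invented for this example:

-- Create and run a SQL Tuning Advisor task by hand (placeholder sql_id)
DECLARE
  l_task VARCHAR2(128);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => 'abc123xyz456',        -- placeholder
              task_name => 'demo_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Review the advisor's findings
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;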

    Read the article

  • Agile Manifesto, Revisited

    - by GeekAgilistMercenary
Again, conversations give me a zillion things to write about.  The recent conversation that has cropped up again concerns my various viewpoints on the Agile Manifesto.  Not all the processes that came after the manifesto was written, but just the core manifesto itself.  Just for context, here is the manifesto in all its glory.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Several of the key signatories at the time went on to write some of the core books that really gave Agile Software Development traction.  If you check out the Agile Manifesto Site and do a search for any of those people, you will find a treasure trove of software development information.

My 2 Cents
First off, I agree with a few people out there.  Agile is not Scrum, for instance.  Do NOT get these things confused when checking out Agile, or pushing forward with Scrum.  As David Starr points out in his blog entry, "About 35 minutes into this discussion, I realized I hadn't heard a question or comment that wasn't related to Scrum. I asked the room, 'How many people are on an agile team that is NOT using Scrum?' 5 hands. Seriously, out of about 150 people or so. 5 hands." So know, as this is one of my biggest pet peeves these days, that Scrum is not Agile.  Another quote David writes: "I assure you, dear reader, 2 week time boxes does not an agile team make." This is the exact problem.  Take a look at the actual manifesto above.  The first ideal: "Individuals and interactions over processes and tools".  There are a couple of meanings in this ideal, just as there are in the other written ideals.  But this one has a lot of contention with a set practice such as Scrum.  There are other formulas, namely XP (eXtreme Programming) and Kanban, that come to mind often.  But none of these is Agile; each is instead a process based on the ideals of Agile.

Some of you may be thinking, "that's the same thing".  Well, no, it is not.  This type of differentiation is vitally important.  Agile is a set of ideals.  Processes are nice, but they can change; they may work for some and not others.  The Agile Manifesto covers the ideals behind what is intended, that intention being to learn and find new ways to build better software.

Ideals, not processes.  Definition versus implementation.  Class versus object.  The ideals are of utmost importance; the processes are secondary.  The first ideal really lays this out for me: "Individuals and interactions over processes and tools".  Yes, we need tools, but we need the individuals and their interactions more.

For those coming into a development team, I hope you take this to mind.  It is of utmost importance that this differentiation is known and fought for.  The second the process becomes more important than the individuals and interactions, the team will effectively lose the advantages of the Agile ideals.

This is just one of my first thoughts on the topic of Agile.  I will be writing more in the near future about each of the ideals.  I will make a point to outline more of my thoughts, my opinions, and my experience with the ideals of Agile and the various processes that are out there. Maybe I will stumble upon something new with the help of my readers?  That would be a grand overture to the ideals I hold. For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like. Agilist Mercenary

    Read the article

  • Java in Flux: Utopia or Deuteranopia?

    - by Tori Wieldt
What a difference a year makes, indeed. Steve Harris, Senior VP, App Server Dev, Oracle, and Adam Messinger, VP, Fusion Middleware Group, Oracle, presented an informative keynote at the TheServerSide Java Symposium today. With a title like "Java in Flux: Utopia or Deuteranopia?" you know things are going to be interesting (see Aeon Flux if you don't get the title reference).

What a Year
They started with a little background, explaining that the reactions to Oracle's acquisition of Sun (and therefore Java) one year ago varied greatly, from "Freak Out!" to "Don't Panic." From the Oracle perspective, being the steward of and a key contributor to Java requires a lot of sausage making.  They admitted to Oracle's fair share of Homer Simpson-esque "D'oh" moments in the past year, complicated by Oracle's communication style.   "Oracle has a tradition of saying a few things and sticking by them, in contrast to Sun, which was much more open," Adam explained. "We laid out the Java roadmap and are executing on it, and we hope that speaks to our commitment."

Java SE
Adam talked about having a long-term perspective on the Java language (20+ years), letting ideas mature in more experimental languages, then bringing them into Java. Current priorities include: JVM convergence (getting the best features of JRockit into HotSpot); support for parallel/multi-core programming; and, of course, all the improvements in JDK7. The JDK7 Developer Preview is underway (please download now and report bugs!). The Oracle development team is also working on Lambda and modularity (Jigsaw) for SE 8. Less certain, but also under discussion, are improvements for Java SE 9. Adam is thinking of it as a "back to basics" release. He mentioned reworking JNI, improving data integration and improved device support.

Java EE
To provide context about Java EE, Steve said Java EE was great at getting businesses on the internet. The success of Java EE resulted in an incredible expansion of the middleware marketplace for developers and vendors.  But with success came more. Java EE kept piling on capabilities, and that created excess baggage.  Doing simple things was no longer so simple. That's where the Java community is so valuable: "When Java EE was too complex and heavyweight, many people were happy to tell us what we were doing wrong and popularize solutions," Steve explained. Because of that feedback, the Java EE teams focused on making things simple again: POJOs and annotations, and leveraging changes in Java SE.  Steve said that "innovation doesn't happen in expert groups, it happens on the ground where developers are solving problems," and platform stewards need to pay attention and take advantage of changes that are taking place.

Enter the Cloud
"Developers are restless, they want cloud functionality from their own IT dept," Steve explained. With the cloud, the scope of the problem has expanded to include the data center itself, with multiple tenants. To move forward, existing APIs in Java EE need to be updated to be tenant-aware and service-enabled, and EE needs to support various styles of deployment. The goal is to get all that done in Java EE 8.

Adam questioned Steve about timing and schedule. "Yes, the schedule is aggressive, but it'll work," Steve said. Then Adam asked about modularization. If Java SE 8 comes out at the end of 2012, when can Java EE deliver modularization? Steve suggested that key stakeholders can come up with some pre-SE 8 agreement on how to expose the metadata about modules. He then alluded to Mark Reinhold and John Duimovich's keynote at EclipseCON next week. Stay tuned.

Evil Master Plan
In conclusion, Adam finally admitted to Oracle's Evil Master Plan: 1) Invest in and improve Java SE and EE; 2) Collaborate with the community; 3) Broaden the marketplace for Java development. Bwaaaaaaaaahahaha! <rubs hands together>

Key Links
JDK7 Developer Preview: http://jdk7.java.net/preview/
Oracle Technology Network: http://www.oracle.com/technetwork/java/index.html
TheServerSide Java Symposium: http://javasymposium.techtarget.com/
"Utopia or Deuteranopia?": http://en.wikipedia.org/wiki/Aeon_Flux

    Read the article

  • C# 4 Named Parameters for Overload Resolution

    - by Steve Michelotti
C# 4 is getting a new feature called named parameters. Although this is a stand-alone feature, it is often used in conjunction with optional parameters. Last week when I was giving a presentation on C# 4, I got a question about a scenario regarding overload resolution that I had not considered before, which yielded interesting results. Before I describe the scenario, a little background first. Named parameters is a well-documented feature that works like this: suppose you have a method defined like this:

void DoWork(int num, string message = "Hello")
{
    Console.WriteLine("Inside DoWork() - num: {0}, message: {1}", num, message);
}

This enables you to call the method with any of these:

DoWork(21);
DoWork(num: 21);
DoWork(21, "abc");
DoWork(num: 21, message: "abc");

and the corresponding results will be:

Inside DoWork() - num: 21, message: Hello
Inside DoWork() - num: 21, message: Hello
Inside DoWork() - num: 21, message: abc
Inside DoWork() - num: 21, message: abc

This is all pretty straightforward and well documented. What is slightly more interesting is how resolution is handled with method overloads. Suppose we had a second overload for DoWork() that looked like this:

void DoWork(object num)
{
    Console.WriteLine("Inside second overload: " + num);
}

The first rule applied for method overload resolution in this case is that the compiler looks for the most strongly-typed match first.  Hence, since the second overload has System.Object as the parameter rather than Int32, this second overload will never be called for any of the 4 method calls above.  But suppose the method overload looked like this:

void DoWork(int num)
{
    Console.WriteLine("Inside second overload: " + num);
}

In this case, both overloads have the first parameter as Int32, so they both fulfill the first rule equally.  In this case the overload with the optional parameters will be ignored if those parameters are not specified. Therefore, the same 4 method calls from above would result in:

Inside second overload: 21
Inside second overload: 21
Inside DoWork() - num: 21, message: abc
Inside DoWork() - num: 21, message: abc

Even all this is pretty well documented. However, we can now consider the very interesting scenario I was presented with. The question was what happens if you change the parameter name in one of the overloads.  For example, what happens if you change the parameter *name* for the second overload like this:

void DoWork(int num2)
{
    Console.WriteLine("Inside second overload: " + num2);
}

In this case, the first 2 method calls will yield *different* results:

DoWork(21);
DoWork(num: 21);

results in:

Inside second overload: 21
Inside DoWork() - num: 21, message: Hello

We know the first method call will go to the second overload because of normal method overload resolution rules, which ignore the optional parameters.  But for the second call, even though all the same rules apply, the compiler will allow you to specify a named parameter which, in effect, overrides the typical rules and directs the call to the first overload. Keep in mind this only works if the method overloads have different parameter names for the same types (which in itself is weird). But it is a situation I had not considered before, and it is one in which you should be aware of the rules that the C# 4 compiler applies.
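A natural follow-on (my own extrapolation, not from the original post): once the overloads carry different parameter names, a caller can deliberately steer resolution to either overload by name, since an overload is only applicable when it has a parameter matching every named argument. A minimal, self-contained sketch:

using System;

class Program
{
    static void DoWork(int num, string message = "Hello")
    {
        Console.WriteLine("Inside DoWork() - num: {0}, message: {1}", num, message);
    }

    static void DoWork(int num2)
    {
        Console.WriteLine("Inside second overload: " + num2);
    }

    static void Main()
    {
        DoWork(num: 21);   // only the first overload has a parameter named 'num'
        DoWork(num2: 21);  // only the second overload has a parameter named 'num2'
    }
}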

    Read the article

  • Java EE 6 Pocket Guide from O'Reilly - Now Available in Paperback and Kindle Edition

    - by arungupta
Hot off the press ... the Java EE 6 Pocket Guide from O'Reilly Media is now available in Paperback and Kindle Edition. Here are the book details:

Release Date: Sep 21, 2012
Language: English
Pages: 208
Print ISBN: 978-1-4493-3668-4 | ISBN 10: 1-4493-3668-X
Ebook ISBN: 978-1-4493-3667-7 | ISBN 10: 1-4493-3667-1

The book provides a comprehensive summary of the Java EE 6 platform. The main features of the different technologies in the platform are explained and accompanied by tons of samples. A chapter is dedicated to each of Managed Beans, Servlets, Java Persistence API, Enterprise JavaBeans, Contexts and Dependency Injection, JavaServer Faces, SOAP-Based Web Services, RESTful Web Services, Java Message Service, and Bean Validation, in that order.

Many thanks to Markus Eisele, John Yeary, and Bert Ertman for reviewing and providing valuable comments. This book would not have been possible without their extensive feedback! The book was mostly written by compiling my blogs, material from 2-day workshops, and several hands-on workshops around the world. The interactions with users of the different technologies and whiteboard discussions with different specification leads helped me understand the technology better. Many thanks to them for helping me be a better user! The long international flights during my travel around the world proved extremely useful for authoring the content. No phone, no email, no IM, food served on the table, power outlet = a perfect recipe for authoring ;-)

Markus wrote a detailed review of the book. He was one of the manuscript reviewers as well and provided valuable guidance. Some excerpts from his blog: It covers the basics you need to know of Java EE 6 and gives good examples of all relevant parts. ... This is a pocket guide which is comprehensively written. I could follow all examples and it was a good read overall. No complicated constructs and clear writing. ... GO GET IT! It is the only book you probably will need about Java EE 6! It is comprehensive, wonderfully written and covers everything you need in your daily work. It is not a complete reference but provides a great shortcut to the things you need to know. To me it is a good beginners guide and also works as a companion for advanced users.

Here is the first tweet feedback ... Jeff West was super prompt to place the first pre-order of my book, pretty much the hour it was announced. Thank you Jeff! @mike_neck posted the very first tweet about the book, thanks for that!

The book is now available in Paperback and Kindle Edition from the following websites:

O'Reilly Media (Ebook, Print & Ebook, Print)
Amazon.com (Kindle Edition and Paperback)
Barnes and Noble
Overstock (1% off Amazon)
Buy.com
Booktopia.com
Tower Books
Angus & Robertson
Shopping.com

Here is how I can use your help:

Help spread the word about the book.
If you bought a Paperback or downloaded the Kindle Edition, then post your review here.
If you have not bought it yet, you can buy it at amazon.com and the multiple other websites mentioned above.
If you are coming to JavaOne, you'll have an opportunity to get a free copy at O'Reilly's booth on Monday (October 1) from 2-3pm. And you can always buy it from the JavaOne Bookstore.

I hope you enjoy reading it and learn something new from it or hone your existing skills. As always, looking forward to your feedback!

    Read the article

  • Smooth Sailing or Rough Waters: Navigating Policy Administration Modernization

    - by helen.pitts(at)oracle.com
Life insurance and annuity carriers continue to recognize the need to modernize their aging policy administration systems, but may be hesitant to move forward because of the inherent risk involved. To help carriers better prepare for what lies ahead, LOMA's Resource Magazine asked Karen Furtado, partner at Strategy Meets Action, to help them chart a course in Navigating Policy Administration Selection, the cover story of this month's issue.

The industry analyst and research firm recently asked insurance carriers to name the business drivers for replacing legacy policy administration systems. The top five cited, according to Furtado, centered on:

Supporting growth in current lines
Improving competitive position
Containing and reducing costs
Supporting growth in new lines
Supporting agent demands and interaction

It's no surprise that fueling growth, both now and in the future, continues to be a key driver for modernization. Why? Inflexible, hard-coded legacy systems require customization by IT every time a change is required. This in turn impedes a carrier's agility, constraining its ability to quickly adapt to changing regulatory requirements and evolving market demands. It also stymies its ability to quickly bring new products to market or rapidly configure changes to existing ones, and can inhibit how carriers service customers and distribution channels.

In the article, Furtado advised carriers to ensure that the policy administration system they are considering is current and modern, with an adaptable user interface and a flexible service-oriented architecture. She said carriers should ask themselves, "How much do you need flexibility and agility now and in the future? Does it support the business processes and rules that are needed for you to be able to create that adaptable environment?" Furtado went on to advise that carriers "Connect your strategy to your business and technical capabilities before you make investment choices…You want to enable your organization to transform for the future, not just automate the past."

Unlocking High Performance with Policy Administration Transformation was also the topic of a recent LOMA webcast moderated by Ron Clark, editor of LOMA's Resource Magazine. The webcast, which featured speakers from Oracle Insurance and Capgemini, focused on how insurers can competitively drive high performance by:

Replacing a legacy policy administration system with a modern, flexible platform
Optimizing IT and operations costs, creating consistent processes and eliminating resource redundancies
Selecting the right partner with the best blend of technology, operational, and consulting capabilities to achieve market leadership
Understanding the value of outsourcing closed block operations

Learn more by clicking here to access this free, one-hour recorded webcast. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article
