Search Results


  • 12c New Feature - Active Data Guard Far Sync

    - by Jian Zhang
    Overview
    ================
    Active Data Guard Far Sync is a new Oracle 12c feature (also called a Far Sync Standby). A Far Sync instance sits between the primary database and a remote standby database: the primary ships redo synchronously to the nearby Far Sync instance, and the Far Sync instance then forwards the redo asynchronously to the remote standby. The Far Sync instance consists only of a control file, an init parameter file and standby redo logs; it has no data files.
    With redo transport in Maximum Availability mode, the primary ships redo synchronously to a Far Sync instance located close to it, so zero data loss protection is achieved without the primary having to ship redo synchronously over a long distance and suffer the corresponding performance impact. The Far Sync instance then forwards the redo asynchronously to the remote standby database.
    With redo transport in Maximum Performance mode, the primary ships redo to the Far Sync instance, and the Far Sync instance forwards it to one or more remote standby databases, offloading the redo transport work from the primary.
    Far Sync instances also take part in the Data Guard role transitions (switchover/failover) supported in 12c: so that the same protection is available after a switchover or failover, a second Far Sync instance can be placed near the standby, allowing the new primary to ship redo through a Far Sync instance as well. For high availability of a Far Sync instance itself, an alternate destination can be configured.
    Configuring Far Sync
    ================
    1. Set up Data Guard as you would in 11.2, see «Active Database Duplication for A standby database».
    2. Create the Far Sync instance. It needs only a control file, an init parameter file and standby redo logs; it has no data files. Create the Far Sync control file on the primary: SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/controlfs01.ctl';
    3. Configure the primary to ship redo synchronously to the Far Sync instance, e.g. via LOG_ARCHIVE_DEST_2 on the primary: LOG_ARCHIVE_DEST_2='SERVICE=dg12cfs SYNC AFFIRM MAX_FAILURE=1 ALTERNATE=LOG_ARCHIVE_DEST_3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg12cfs'
    4. Configure the Far Sync instance to forward redo asynchronously to the standby, e.g. via LOG_ARCHIVE_DEST_2 on the Far Sync instance: LOG_ARCHIVE_DEST_2='SERVICE=dg12cs ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=dg12cs'
    5. For high availability of the Far Sync instance itself, configure an alternate destination, for example a second Far Sync instance.
    6. Verify the configuration:
    SQL> select * from V$DATAGUARD_CONFIG;
    DB_UNIQUE_NAME   PARENT_DBUN   DEST_ROLE           CURRENT_SCN   CON_ID
    ---------------  ------------  ------------------  -----------   ------
    dg12cfs          dg12cp        FAR SYNC INSTANCE        682995        0
    dg12cs           dg12cfs       PHYSICAL STANDBY         682995        0
    dg12cp           NONE          PRIMARY DATABASE         683138        0
    A more detailed description is available in the attached document: Oracle_12c_Active_Data_Guard_Far_Sync_v1.pdf
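    As a quick sanity check after step 6 (this query is an illustrative sketch rather than part of the original article, and the destination numbers are assumptions that must match your own LOG_ARCHIVE_DEST_n settings), the standard V$ARCHIVE_DEST view can be queried on the primary to confirm that redo transport to the Far Sync destination is error-free:
    SQL> SELECT dest_id, status, error, destination
           FROM v$archive_dest
          WHERE dest_id IN (2, 3);
    A STATUS of VALID and an empty ERROR column for the Far Sync destination indicate that the synchronous leg from the primary is healthy; the same query run on the Far Sync instance shows the state of the asynchronous leg towards the terminal standby.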

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields “System.CreatedDate” and “System.ChangedDate” on a work item. Richard Hundhausen has a great blog with ample reasons why you may or may not need to set the values of these fields to historic dates. In this blog post I’ll show you how to: Create a PBI WorkItem linked to a Task work item by pre-setting the value of the field ‘System.ChangedDate’ to a historic date Change the value of the field ‘System.Created’ to a historic date Simulate the historic burn down of a task type work item in a sprint Explain the impact of updating values of the fields CreatedDate and ChangedDate on the Sprint burn down chart Rules of Play 1. You need to be a member of the Project Collection Service Accounts 2. You need to use ‘WorkItemStoreFlags.BypassRules’ when you instantiate the WorkItemStore service // Instantiate Work Item Store with the BypassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules); 3. You cannot set the ChangedDate - Less than the changed date of previous revision - Greater than current date Walkthrough The walkthrough contains 5 parts 00 – Required References 01 – Connect to TFS Programmatically 02 – Create a Work Item Programmatically 03 – Set the values of fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates 04 – Results of our experiment Let’s get started………………………………………………… 00 – Required References Microsoft.TeamFoundation.dll Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll 01 – Connect to TFS Programmatically I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker. // Services I need access to globally private static TfsTeamProjectCollection _tfs; private static ProjectInfo _selectedTeamProject; private static WorkItemStore _wis; // Connect to TFS Using Team Project Picker public static bool ConnectToTfs() { var isSelected = false; // The user is allowed to select only one project var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPp.ShowDialog(); // The TFS project collection _tfs = tfsPp.SelectedTeamProjectCollection; if (tfsPp.SelectedProjects.Any()) { // The selected Team Project _selectedTeamProject = tfsPp.SelectedProjects[0]; isSelected = true; } return isSelected; } 02 – Create a Work Item Programmatically In the below code snippet I have created a Product Backlog Item and a Task type work item and then linked them together as parent and child. Note – You will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try and set the ChangedDate to a value earlier than last assigned you will receive the following exception… TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator. If you notice below I have added a few seconds each time I have modified the ‘ChangedDate’ just to avoid running into the exception listed above.
// Create Linked Work Items and return Ids private static List<int> CreateWorkItemsProgrammatically() { // Instantiate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules); // List of work items to return var listOfWorkItems = new List<int>(); // Create a new Product Backlog Item var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]); p.Title = "This is a new PBI"; p.Description = "Description"; p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); p.AreaPath = _selectedTeamProject.Name; p["Effort"] = 10; // Just double checking that ByPassRules is set to true if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (p.Validate().Count == 0) { p.Save(); listOfWorkItems.Add(p.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]); t.Title = "This is a task"; t.Description = "Task Description"; t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); t.AreaPath = _selectedTeamProject.Name; t["Remaining Work"] = 10; if (_wis.BypassRules) { t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (t.Validate().Count == 0) { t.Save(); listOfWorkItems.Add(t.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in t.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"]; p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) {ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20)}); if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20); } if (p.Validate().Count == 0) { p.Save(); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } return listOfWorkItems; } 03 – Set the value of “Created Date” and Change the value of “Changed Date” to Historic Dates The CreatedDate can only be changed after a work item has been created. If you try and set the CreatedDate to a historic date at the time of creation of a work item, it will not work. 
// Lets do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic Values private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems) { foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); switch (wi.Type.Name) { case "ProductBacklogItem": if (wi.State.ToLower() == "new") wi.State = "Approved"; // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed Date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; case "Task": // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; } } // A mock sprint start date var sprintStart = DateTime.Today.AddDays(-5); // A mock sprint end date var sprintEnd = DateTime.Today.AddDays(5); // What is the total Sprint duration var totalSprintDuration = (sprintEnd - sprintStart).Days; // How much of the sprint have we already covered var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days; // Get the effort assigned to our tasks var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems); // Defining how much effort to burn every day decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration; // we have just created one task var totalNoOfTasks = 1; var simulation = sprintStart; var currentDate = DateTime.Today.Date; // Carry on till effort has been burned down from sprint start to today while (simulation.Date != currentDate.Date) { var dailyBurnRate1 = dailyBurnRate; // A fixed amount needs to be burned down each day while (dailyBurnRate1 > 0) { // burn down bit by bit from all unfinished task type work items foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); var isDirty = false; // Set the status to in progress if (wi.State.ToLower() == "to do") { wi.State = "In Progress"; isDirty = true; } // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1) { // If there is less than 1 unit of effort left in the task, burn it all if (Convert.ToDecimal(wi["Remaining Work"]) <= 1) { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } else { // How much to burn from each task? var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 
1 : (dailyBurnRate / totalNoOfTasks); // Check that the task has enough effort to allow burnForTask effort if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn) { wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn; dailyBurnRate1 = dailyBurnRate1 - toBurn; isDirty = true; } else { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } } } else { dailyBurnRate1 = 0; } if (isDirty) { if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date) { wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20); } else { wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20); } wi.Save(); } } } // Increase date by 1 to perform daily burn down by day simulation = Convert.ToDateTime(simulation).AddDays(1); } } // Get the Total effort remaining in the current sprint private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems) { var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition( new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value)); var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } }; var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters); var results = q.RunLinkQuery(); var wis = new List<WorkItem>(); foreach (var result in results) { var _wi = _wis.GetWorkItem(result.TargetId); if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id)) wis.Add(_wi); } return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"])); }   04 – The Results If you are still reading, the results are beautiful! Image 1 – Create work item with Changed Date pre-set to historic date Image 2 – Set the CreatedDate to historic date (Same as the ChangedDate) Image 3 – Simulate of effort burn down on a task via the TFS API   Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day Sprint Burn Down Chart – What’s not possible? The Sprint burn down chart is calculated from the System.AuthorizedDate and not the System.ChangedDate/System.CreatedDate. So, though you can change the System.ChangedDate and System.CreatedDate to historic dates you will not be able to synthesize the sprint burn down chart. Image 1 – By changing the Created Date and Changed Date to ‘18/Oct/2012’ you would have expected the burn down to have been impacted, but it won’t be, because the sprint burn down chart uses the value of field ‘System.AuthorizedDate’ to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field ‘System.AuthorizedDate’. Image 2 – Using the above code I burned down 1 hour effort per day over 5 days from the task work item, I would have expected the sprint burn down to show a constant burn down, instead the burn down shows the effort exhausted on the 24th itself. Simply because the burn down is calculated using the ‘System.AuthorizedDate’. Now you would ask… “Can I change the value of the field System.AuthorizedDate to a historic date” Unfortunately that’s not possible! 
You will run into the exception ValidationException – “TF26194: The value for field ‘Authorized Date’ cannot be changed.” Conclusion - You need to be a member of the Project Collection Service Accounts group in order to set the fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates - You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate and you cannot reset the ChangedDate to a date greater than the current date time. - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can however reset the CreatedDate to a date earlier than the existing value. - You will not be able to synthesize the Sprint burn down chart by changing the value of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use the System.AuthorizedDate and NOT the System.ChangedDate & System.CreatedDate - System.AuthorizedDate cannot be set to a historic date using the TFS API Read other posts on using the TFS API here… Enjoy!

    Read the article

  • Azure Service Bus - Authorization failure

    - by Michael Stephenson
    I fell into this trap earlier in the week with a mistake I made when configuring a service to send and listen on the azure service bus and I thought it would be worth a little note for future reference as I didnt find anything online about it.  After configuring everything when I ran my code sample I was getting the below error. WebHost failed to process a request.Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/28316044Exception: System.ServiceModel.ServiceActivationException: The service '/-------/BrokeredMessageService.svc' cannot be activated due to an exception during compilation.  The exception message is: Generic: There was an authorization failure. Make sure you have specified the correct SharedSecret, SimpleWebToken or Saml transport client credentials.. ---> Microsoft.ServiceBus.AuthorizationFailedException: Generic: There was an authorization failure. Make sure you have specified the correct SharedSecret, SimpleWebToken or Saml transport client credentials.   at Microsoft.ServiceBus.RelayedOnewayTcpClient.ConnectRequestReplyContext.Send(Message message, TimeSpan timeout, IDuplexChannel& channel)   at Microsoft.ServiceBus.RelayedOnewayTcpListener.RelayedOnewayTcpListenerClient.Connect(TimeSpan timeout)   at Microsoft.ServiceBus.RelayedOnewayTcpClient.EnsureConnected(TimeSpan timeout)   at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)   at Microsoft.ServiceBus.Channels.RefcountedCommunicationObject.Open(TimeSpan timeout)   at Microsoft.ServiceBus.RelayedOnewayChannelListener.OnOpen(TimeSpan timeout)   at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)   at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)   at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)   at Microsoft.ServiceBus.SocketConnectionTransportManager.OnOpen(TimeSpan timeout)   at Microsoft.ServiceBus.Channels.TransportManager.Open(TimeSpan timeout, TransportChannelListener channelListener)   at Microsoft.ServiceBus.Channels.TransportManagerContainer.Open(TimeSpan timeout, SelectTransportManagersCallback selectTransportManagerCallback)   at Microsoft.ServiceBus.SocketConnectionChannelListener`2.OnOpen(TimeSpan timeout)   at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)   at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)   at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)   at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)   at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath)   at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)   --- End of inner exception stack trace ---   at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)   at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)Process Name: w3wpProcess ID: 8056As recommended by the error message I checked everything about the application configuration and also the keys and eventually I found the problem.When I set 
the permissions in the ACS rule group, I had copied and pasted the claim name for net.windows.servicebus.action from the Azure portal and hadn't spotted the <space> character on the end of it, like you sometimes pick up when copying text in the browser. This meant that the listen and send permissions were not set up correctly, which is why (as you would expect) my two applications could not connect to the service bus. So, lesson learnt here: if you do copy and paste into the ACS rules, just be careful you don't leave a space on the end of anything, otherwise it will be difficult to spot that it's configured incorrectly.

    Read the article

  • links for 2011-01-04

    - by Bob Rhubart
    Webcasts (tags: ping.fm) Five Key Trends in Enterprise 2.0 for 2011 (Oracle Enterprise 2.0 Blog) Kellsey Ruppel shares insight from Oracle's Andy MacMillan. (tags: oracle otn enterprise2.0) Victor Bax: Lost in Service Oriented Architecture? "SOA is a concept, no more, no less. SOA is not a technology, or a piece of software. It is an architecture, a model." - Victor Bax (tags: oracle soa) Jan-Leendert: Oracle 11g SOA Suite read multi record data from csv file with the file adapter (master-detail) "The file adapter is a very powerlful tool to read files with structured data. Most of the time you will read simple csv files with one record per row. But what if your csv file contains multiple records with different types?" - Jan-Leendert (tags: oracle soa soasuite) @myfear: Five ways to know how your data looked in the past. Entity Auditing. "Whatever requirements you have. I can promise you, that it will never be a simple solution. In general it's best to evaluate your purpose for auditing in detail." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java) @fteter: Buffing Up The Crystal Ball "While I'm already tired of seeing these types of posts (I'm writing on New Year's Day), I'm also feeling guilty about not making my own set of predictions." - Oracle ACE Director Floyd Teter (tags: oracle otn oracleace ec2 cloud fusionmiddleware) @bex: ECM New Year's Resolutions "Happy new year! Most people use the first post of the year to go over their own blog statistics of popular posts... but since my blog's fiscal year ends in April, I decided to do new years resolutions instead." - Oracle ACE Director Bex Huff (tags: oracle otn oracleace ecm enterprise2.0) Izaak de Hullu: Embedded Java in a 11g BPEL process "In an earlier blog my colleague Peter Ebell explained how you can create an extension of com.collaxa.cube.engine.ext.BPELXExecLet to do your coding in a regular Java environment so you have code completion and validation..." - Izaak de Hullu (tags: oracle otn bpel java soa) @gschmutz: Cannot access EM console after installing SOA Suite 11g PS2 Oracle ACE Director Guido Schmutz encounters a problem and shares the solution. (tags: oracle otn oracleace soa soasuite)

    Read the article

  • Hill International Wins Oracle Eco-Enterprise Innovation Award

    - by Evelyn Neumayr
    In my last blog entry, I discussed Oracle’s Eco-Enterprise Innovation Award, part of the Oracle Excellence awards. Nominations for this year’s awards are due July 17. These awards are presented to organizations that use Oracle products to reduce their environmental footprint while improving their operational efficiency. One of last year’s winners was Hill International. Engineering News-Record magazine recently ranked Hill as the eighth-largest construction management firm in the United States. Hill International was able to streamline its forecasting and improve its visibility into its construction projects’ productivity and profitability using Oracle Primavera. They also implemented Oracle Hyperion Financial Management to standardize its financial reporting and forecasting processes and support its decision-making. With Oracle, Hill gained visibility into the true productivity of each project and cut its financial reporting cycle time from two weeks to one. The company also used the data generated to support new construction project proposals and determine the profitability of potential projects. Hill International realized significant cost savings and reduced its environmental impact on its US$400 million Comcast Center construction project in Philadelphia by centralizing its data storage, reducing paper usage, and maximizing project efficiency. It also leveraged the increased visibility offered by the Oracle solutions to make more environmentally-sound business decisions regarding on-site demolition, re-use of previous structures, green design of new facilities, procurement, and materials usage. See more about Hill International and the other Eco-Enterprise Innovation award winners here.  

    Read the article

  • Force Blank TextBox with ASP.Net MVC Html.TextBox

    - by Doug Lampe
    I recently ran into a problem with the following scenario: I have parent/child data with a one-to-many relationship from the parent to the child. I want to be able to update parent and existing child data AND add a new child record all in a single post. I don't want to create a model just to store the new values. One of the things I LOVE about MVC is how flexible it is in dealing with posted data.  If you have data that isn't in your model, you can simply use the non-strongly-typed HTML helper extensions and pass the data into your actions as parameters or use the FormCollection.  I thought this would give me the solution I was looking for.  I simply used Html.TextBox("NewChildKey") and Html.TextBox("NewChildValue") and added parameters to my action to take the new values.  So here is what my action looked like: [HttpPost] public ActionResult EditParent(int? id, string newChildKey, string newChildValue, FormCollection forms) {     Model model = ModelDataHelper.GetModel(id ?? 0);     if (model != null)     {         if (TryUpdateModel(model))         {             if (ModelState.IsValid)             {                 model = ModelDataHelper.UpdateModel(model);             }             string[] keys = forms.GetValues("ChildKey");             string[] values = forms.GetValues("ChildValue");             ModelDataHelper.UpdateChildData(id ?? 0, keys, values);             ModelDataHelper.AddChildData(id ?? 0, newChildKey, newChildValue);             model = ModelDataHelper.GetModel(id ?? 0);         }        return View(model);     }    return new EmptyResult(); } The only problem with this is that MVC is TOO smart.  Even though I am not using a model to store the new child values, MVC still passes the values back to the text boxes via the model state.  The fix for this is simple but not necessarily obvious: simply remove the data from the model state before returning the view: ModelState.Remove("NewChildKey"); ModelState.Remove("NewChildValue"); Two lines of code to save a lot of headaches.

    Read the article

  • Feeling Old? Before Middleware, Gamification, and MacBook Airs

    - by ultan o'broin
    Think we're done with green screens in the enterprise apps world? Fusion User Experience Advocate Debra Lilley (@debralilley) drew my attention to this super retro iPad terminal emulator app being used by a colleague to connect to JDE. Yes, before Middleware, this is how you did it. Surely the ultimate in hipster retro coexistence? Mind you, I've had to explain to lots of people I showed this to just what Telnet and IBM AS/400 are (or were). MochaSoft TN5250 Terminal Emulator iPad App This OG way of connecting to apps is a timely reminder not to forget all those legacy apps out there and the UX aspect to adoption and change. If a solution already works well and there's an emotional attachment to it, then the path to upgrade needs to be very clear and have valuable and demonstrable ROI for users and decision makers, a path that spans emotion and business benefits. On a pure usability front, that old school charm of the character-based green glow look 'n' feel could be easily done as a skin, personalizing an application for the user so that they feel comfortable with it. Fun too particularly in the mobile and BYOD space! In fact, there is a thriving retro apps market out there as illustrated by this spiffy lunar lander app (hat tip: John Cartan), part of a whole set of Atari's greatest hits available for iOS. Lunar Lander App And of course, there's the iOS version of Pong. Check out this retro Apple Mac SE/30 too. I actually remember using one of these. I have an Apple Mac Plus somewhere in my parents' house. I tried it out recently, and it actually booted, although all it was good for was playing the onboard games. Looking at all these olde worlde things makes me feel very old, but kinda warm inside too. The latter is a key part of today's applications user experience too.

    Read the article

  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers.    Implementation team members need to have the skills and knowledge to ensure a smooth, rapid and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions can be made during set-up that best align with your business.  To that end product level implementation training is recommended for Oracle Global HR Cloud deployments. Training For Implementation Team Members and Consultants Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup. Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data. Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements. Fusion Applications: HCM Benefits: This course teaches you to implement, configure and manage Oracle Fusion Benefits, including how to implement benefit plans and programs.  Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers or payroll administrators. Learn how to process payroll to ensure accurate setup results.  Learn More: See all Fusion HCM Training Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I’d share it to save others who might encounter it the same annoyance I had. So I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project) and whenever it would hit the code to save it, it would give me: The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications. This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this: [DataContract] public class MyAwesomeClass { [DataMember] public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }   [DataMember] public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }     public MyAwesomeClass() { GreatItems = new ObservableCollection<MyMagicItemsClass>(); SuperbItems = new ObservableCollection<MyMagicItemsClass>(); } }   That’s all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it’s not, because I keep getting that “not public” exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn’t ObservableCollection<T>, trying to serialize the Collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the “Debug->Exceptions…” VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the “Thrown” checkbox for “Common Language Runtime Exceptions”, started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to them and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a “partial trust” environment and outside of “full trust”-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET so all I ever saw was that bizarre message about things that I knew for certain were public being “not public”. I changed all the private fields I was serializing to public and everything worked just fine. In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don’t want people using reflection to get at non-public members of an object since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods and cause some havoc by reflecting and setting the appropriate fields in certain circumstances).
The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing that could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break on all thrown exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.

    Read the article

  • Process Oracle OER Events using a simple Web Service

    - by Bob Webster
    This post provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types. The web service example implements the following: When a new Asset is Submitted to OER: The Asset is automatically Accepted by a defined user. When an Asset is Accepted: The Asset is automatically assigned to a defined user for review. If the accepted asset is of type Service, the Version metadata attribute is set based on the version id contained in the suffix of the Service Namespace. When an Asset is Registered: If the registered Asset is of type Service, the related Assets (Interface and Endpoint) are also automatically registered. The sample web service is not intended to replace the out-of-the-box OER BPM-based workflows, but the service can be utilized in cases where only simple automation is required and the developer has a Java skill set. The service is a lightweight web application that can be easily deployed to the same server as OER or to a different server. Read the complete post here
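    The complete implementation is in the linked post; purely as a rough illustration of the shape such a service might take, the skeleton below dispatches on an event type and calls a placeholder client. Everything here is an assumption made for the sketch: the servlet, the request parameters, and the OERClient helper stand in for the actual OER event payload and REX API calls.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class OerEventServlet extends HttpServlet {

        // Placeholder for the real OER REX client; these method names are invented for illustration.
        static class OERClient {
            void acceptAsset(String id, String user) { /* REX call here */ }
            void assignAsset(String id, String user) { /* REX call here */ }
            boolean isServiceAsset(String id) { return false; }
            String getServiceNamespace(String id) { return ""; }
            void setCustomField(String id, String name, String value) { }
            void registerRelatedAssets(String id) { }
        }

        private final OERClient oer = new OERClient();

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Assume the event arrives with a type and an asset id; real OER event payloads differ.
            String eventType = req.getParameter("eventType");
            String assetId = req.getParameter("assetId");

            switch (eventType) {
                case "AssetSubmitted":
                    oer.acceptAsset(assetId, "acceptingUser");      // auto-accept on submission
                    break;
                case "AssetAccepted":
                    oer.assignAsset(assetId, "reviewUser");         // route to a reviewer
                    if (oer.isServiceAsset(assetId)) {
                        // derive the Version attribute from the namespace suffix, e.g. ".../OrderService/1.2"
                        String ns = oer.getServiceNamespace(assetId);
                        oer.setCustomField(assetId, "Version", ns.substring(ns.lastIndexOf('/') + 1));
                    }
                    break;
                case "AssetRegistered":
                    if (oer.isServiceAsset(assetId)) {
                        oer.registerRelatedAssets(assetId);         // register Interface and Endpoint too
                    }
                    break;
                default:
                    break;                                          // ignore event types we do not automate
            }
            resp.setStatus(HttpServletResponse.SC_OK);
        }
    }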

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do... Let Me List the Ways So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise, it's somewhat out of date, but the one good point that it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now obsoleted by weblogic.appc so we'll use that instead. Thanks to Steve for the pointer there. And So To APPC APPC has documentation - always a great place to start - and supports usage both from Ant via the wlappc task and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right...? Not quite. The Devil is in the Detail OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain. Information You'll Need The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well. 1. Your WebLogic home directory. Everything you need is relative to this so make a note. In my case it's c:\builds\wls_ps4. 2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Have a look for any library references. Something like this: <library-ref>    <library-name>adf.oracle.domain</library-name> </library-ref>   Make a note of the library ref (adf.oracle.domain in this case), you'll need that in a second. 3. Next open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again, make a note of the library references. 4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments. 5. For each of the libraries you noted down drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf.
oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out. 6. Finally you'll need the location of the adfsharebean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well. Now we're ready to go, and it's a simple matter of applying the information we have gathered into the relevant command line arguments for the utility. A Simple CMD File to Run APPC Here's the stub .cmd file I'm using on Windows to run this.
@echo off
REM Stub weblogic.appc Runner
setlocal
set WLS_HOME=C:\builds\WLS_PS4
set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
REM Set up the WebLogic Environment so appc can be found
call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
CLS
REM Now compile away!
java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
endlocal
Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!) And So... In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.
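For example - the file names below are purely illustrative, and appc_precompile.cmd is simply whatever you saved the stub above as - a run followed by a quick check that the servlets really were generated inside the nested WAR might look like this:

    C:\work> appc_precompile.cmd MyADFApp.ear
    C:\work> jar xf MyADFApp.ear MyADFWebApp.war
    C:\work> jar tf MyADFWebApp.war | findstr /i jsp_servlet

If the precompilation worked you should see entries under WEB-INF/classes/jsp_servlet/ for each .jspx and .jsff page in the application.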

    Read the article

  • The Oracle OpenWorld 2012 Call for Papers Closes April 9

    - by Kerrie Foy
    It is On! The Oracle OpenWorld 2012 Call for Papers closes April 9. This year's OpenWorld event is September 30 - October 4, Moscone Center, San Francisco. Oracle OpenWorld is among the world’s largest industry events for good reason. It offers a vast array of learning and networking opportunities in one of the planet’s great cities. And one of the key reasons for its popularity is the prominence of presentations by customers. If you would like to deliver a presentation based on your experience, now is the time to submit your abstract for review by the selection panel. The competition is strong: roughly 18% of entries are accepted each year from more than 3,000 submissions. Review panels are made up of experts both internal and external to Oracle. Successful submissions often (but not exclusively) focus on customer successes, how-tos, or best practices. http://www.oracle.com/openworld/call-for-papers/information/index.html What is in it for you? Recognition, for one thing. Accepted sessions are publicized in the content catalog, which goes live in mid-June, and sessions given by external speakers often prove the most popular. Plus, accepted speakers get a complimentary pass to Oracle OpenWorld with access to all sessions and networking events - that could save you up to $2,595! Be sure to designate your session for inclusion in the correct track by selecting “APPLICATIONS: Product Lifecycle Management” from the Primary Track drop-down menu. Looking forward to seeing you at this year's OpenWorld!

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonable well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background an X4800 system consists of 8 Intel X7560 Xeons . Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's not central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant. 
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip-time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, as the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. 
There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings). And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
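For illustration only - this is not the benchmark harness from the experiments above, just a minimal sketch of the request/response mailbox pattern being described, and it deliberately ignores NUMA placement (thread and page placement would have to be arranged at the OS level, for example with numactl, lgroup APIs or processor binding):

    // Minimal ping-pong between a client and a server thread over two single-word
    // mailboxes; volatile provides visibility, and the busy-wait loops mirror the
    // polling described in the post. The server acts as a simple transponder.
    public class MailboxPingPong {
        static volatile long request;    // written by the client, read by the server
        static volatile long response;   // written by the server, read by the client

        public static void main(String[] args) throws InterruptedException {
            final int rounds = 1_000_000;

            Thread server = new Thread(() -> {
                long lastSeen = 0;
                for (int i = 0; i < rounds; i++) {
                    long r;
                    while ((r = request) == lastSeen) { }   // poll the request mailbox
                    lastSeen = r;
                    response = r + 1;                       // reply with request + 1
                }
            });
            server.start();

            long start = System.nanoTime();
            for (long i = 1; i <= rounds; i++) {
                request = i;                                // post a request
                while (response != i + 1) { }               // busy-wait for the reply
            }
            long elapsed = System.nanoTime() - start;
            server.join();
            System.out.printf("average round trip: %.1f ns%n", (double) elapsed / rounds);
        }
    }

On a multi-socket box the interesting exercise is then to pin the two threads and the pages backing the two variables to different nodes and repeat the measurement, which is essentially the permutation sweep the post describes.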

    Read the article

  • Developer Webinar Today: "Publishing IPS Packages"

    - by user13333379
    Oracle's Solaris Organization is pleased to announce a Technical Webinar for Developers on Oracle Solaris 11: "Publishing IPS Packages" by Eric Reid (Principal Software Engineer), today, June 19, 2012, 9:00 AM PDT. This bi-weekly webinar series (every other Tuesday @ 9 a.m. PT) is designed for ISVs, IHVs, and Application Developers who want a deep-dive overview of how they can deploy Oracle Solaris 11 into their application environments. This series will provide you with the unique opportunity to learn directly from Oracle Solaris ISV Engineers and will include LIVE Q&A via chat with subject matter experts from each topic area. Any OTN member can register for this free webinar here. Today's webinar is a deep dive into IPS. The attendees of the initial IPS webinar asked for more information around this topic. Eric Reid, who has worked with leading software vendors (ISVs) to migrate Solaris 10 System V packages to IPS, will share his experience with us.

    Read the article

  • Fight for your rights as a video gamer.

    - by Chris Williams
    Soon, the U.S. Supreme Court may decide whether to hear a case that could have a lasting impact on computer and video games. The case before the Court involves a law passed by the state of California attempting to criminalize the sale of certain computer and video games. Two previous courts rejected the California law as unconstitutional, but soon the Supreme Court could have the final say. Whatever the Court's ruling, we must be prepared to continue defending our rights now and in the future. To do so, we need a large, powerful movement of gamers to speak with one voice and show that we won't sit back while lawmakers try to score political points by scapegoating video games and treating them differently than books, movies, and music. If the Court decides to hear the case, we're going to need thousands of activists like you who can help defend computer and video games by writing letters to editors, calling into talk radio stations, and educating Americans about our passion for and appreciation of computer and video games. You can help build this movement right now by inviting all your friends and fellow gamers to join the Video Game Voters Network. Use our simple tool to send an email to everyone you know asking them to stand up for gaming rights: http://videogamevoters.org/movement You can also help spread the word through Facebook and Twitter, or you can simply forward this email to everyone you know and ask them to sign up at videogamevoters.org. Time after time, courts continue to reject politicians' efforts to restrict the sale of computer and video games. But that doesn't mean the politicians will stop trying anytime soon -- in fact, it means they're likely to ramp up their efforts even more. To stop them, we must make it clear that gamers will continue to stand up for free speech -- and that the numbers are on our side. Help make sure we're ready and able to keep fighting for our gaming rights. Spread the word about the Video Game Voters Network right now: http://videogamevoters.org/movement Thank you. -- Video Game Voters Network

    Read the article

  • Find related field in Dynamics AX

    - by DAXShekhar
    static void findRelatedFieldId(Args _args) {     SysDictTable    dictTable = new SysDictTable(tablenum(InventTrans));     int             i,j;     SysDictRelation dictRelation;     TableId         externId = tablenum(SalesLine);     IndexId         indexId;     ; // Search the explicit relations     for (i = 1; i <= dictTable.relationCnt(); ++i)     {         dictRelation = new SysDictRelation(dictTable.id());         dictRelation.loadNameRelation(dictTable.relation(i)); // If it is a 'relation' then you use externTable(), but for extended data types you use table() (see next block).         if (SysDictRelation::externId(dictRelation) == externId)         {             for (j =1; j <= dictRelation.lines(); j++)             {                 info(strfmt("%1", dictRelation.lineExternTableValue(j)));                 info(fieldid2name(dictRelation.externTable(),dictRelation.lineExternTableValue(j)));             }         }     }     info(strfmt("%1", dictRelation.loadFieldRelation(fieldnum(InventTrans, InventTransId)))); }

    Read the article

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on 12th Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as the Android and iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist. Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it. So what does Oracle IRM have to offer the consumerized enterprise? First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption. Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages: The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth It is a generic solution that means the enterprise does not need to actively support the ever expanding variety of consumer devices - the enterprise just manages some virtual access to traditional systems using something like Citrix or Remote Desktop services. It is a tried and tested way of accessing sealed documents. People have being using Oracle IRM in conjunction with Citrix and Remote Desktop for several years. For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

    Read the article

  • SSL: An Alternative Network Encryption Option for Oracle Databases

    - by Heinz-Wilhelm Fabry (DBA Community)
    The network is an extremely critical attack surface in any security architecture. On the one hand, it is hard to prevent external or even internal attackers from gaining access to the network: anyone with access to a so-called network sniffer (for example the widely used Wireshark) can see all of the data transmitted over it. On the other hand, all commands sent to an Oracle database - with the exception of the username and password information at LOGIN - as well as all data returned from the database, travel over the network in clear text. The risk of 'losing' data over the network can therefore only be brought under control by encrypting the data stream.

    The simplest way to encrypt the data stream is offered by ASO with its so-called native encryption over SQL*Net. It can be enabled on demand, without restarting the database, and in the simplest case requires only a single setting in the configuration file SQLNET.ORA, namely SQLNET.ENCRYPTION_SERVER = REQUIRED. Because it is so easy to implement, this variant is the preferred choice of the vast majority of ASO users, and the approach has already been examined in more detail within the database community. However, ASO can also encrypt network traffic using SSL. This tip describes how to set that up and is intended as a first how-to for getting familiar with the topic.
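
    As a rough illustration of the SSL variant, a minimal server-side sqlnet.ora sketch might look like the following. The wallet directory, host, and port shown here are assumed values for illustration only, not taken from the article, and an Oracle wallet containing the server certificate must already exist at the configured location.

        # Minimal sketch - assumes a wallet with the server certificate at the path below
        WALLET_LOCATION =
          (SOURCE = (METHOD = FILE)
            (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
        SSL_CLIENT_AUTHENTICATION = FALSE

        # The listener and the client connect descriptors then use PROTOCOL = TCPS
        # instead of TCP, for example (hypothetical host and port):
        # (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost)(PORT = 2484)) ...)

    By contrast, the native variant mentioned above needs no wallet at all - the single line SQLNET.ENCRYPTION_SERVER = REQUIRED in the server's SQLNET.ORA is enough to force encryption of the data stream.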

    Read the article

  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail
    Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT / 1pm ET.

    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger

    SOA Still Not Dead: Ratification of Governance Standard Highlights SOA's Continued Relevance
    So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes this InfoQ article by Richard Seroter. And of course you've already listened to the OTN ArchBeat Podcast about governance, right? Right?

    Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan
    The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog.

    ADF Mobile Custom Javascript – iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out of the box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers
    "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.

    Thought for the Day
    "(When) asking skilled architects…what they do when confronted with highly complex problems…(they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' — a knowledge of what is reasonable within a given context. Practicing architects through education, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006)
    Source: SoftwareQuotes.com

    Read the article

  • From The OU Classrooms...

    - by rajeshr
    No excuses for not doing this systematically, and I'm trying my best to break this bad habit of bulk uploads of class photographs and do it regularly instead. But for the time being, please forgive my laziness and make do with this mass introduction of all the fun-loving, yet talented folks whom I met in the OU classrooms during the last three months or so, through the picture essay that follows. It's unfortunate that I don't get to do this for my Live Virtual Classes, for obvious reasons, but let me take a moment to thank those participants as well for choosing OU programs on various products. Thanks again to each one for memorable moments in the OU classrooms:

    Pillar Axiom MaxRep session at Bangkok. For detailed information on the OU course on Pillar Axiom MaxRep, access this page.
    Pillar Axiom SAN Administration session at Bangkok. Know more about the product here. Details on the Pillar Axiom training program from Oracle University can be found here.
    Oracle Solaris ZFS Administration & Oracle Solaris Containers session at Hyderabad. Read more about ZFS here. Gain information on Solaris Containers by going here. Oracle University courses on Solaris 10 and its features can be viewed at this page.
    Oracle Solaris Cluster program at Hyderabad. Here's the OU landing page for the training programs on Oracle Solaris Cluster.
    Oracle Solaris 11 Administration session at Bangalore. If you are interested in getting trained on Solaris 11, get more details at this webpage.
    Sun Identity Manager Deployment Fundamentals session at Bangalore. The product is now known as Oracle Waveset IDM. Click here for a detailed description of this fabulous hands-on training program.
    With Don Kawahigashi at Taipei for Pillar Axiom Storage training.

    Read the article

  • Applying WCAG 2.0 to Non-Web ICT: second draft published from WCAG2ICT Task Force - for public review

    - by Peter Korn
    Last Thursday the W3C published an updated Working Draft of Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies. As I noted last July when the first draft was published, the motivation for this guidance comes from the Section 508 refresh draft, and also the European Mandate 376 draft, both of which seek to apply the WCAG 2.0 level A and AA Success Criteria to non-web ICT documents and software.

    This second Working Draft represents a major step forward in harmonization with the December 5th, 2012 Mandate 376 draft documents, including specifically Draft EN 301549 "European accessibility requirements for public procurement of ICT products and services". This work greatly increases the likelihood of harmonization between the European and American technical standards for accessibility, for web sites and web applications, non-web documents, and non-web software.

    As I noted last October at the European Policy Centre event: "The Accessibility Act – Ensuring access to goods and services across the EU", and again last month at the follow-up EPC event: "Accessibility - From European challenge to global opportunity", "There isn't a 'German Macular Degeneration', a 'French Cerebral Palsy', an 'American Autism Spectrum Disorder'. Disabilities are part of the human condition. They're not unique to any one country or geography – just like ICT. Even the built environment – phones, trains and cars – is the same worldwide. The definition of 'accessible' should be global – and the solutions should be too. Harmonization should be global, and not just EU-wide. It doesn't make sense for the EU to have a different definition to the US or Japan."

    With these latest drafts from the W3C and Mandate 376 team, we've moved a major step forward toward that goal of a global "definition of 'accessible' ICT." I strongly encourage all interested parties to read the Call for Review, and to submit comments during the current review period, which runs through 15 February 2013. Comments should be sent to public-wcag2ict-comments-AT-w3.org.

    I want to thank my colleagues on the WCAG2ICT Task Force for the incredible time and energy and expertise they brought to this work - including particularly my co-authors Judy Brewer, Loïc Martínez Normand, Mike Pluke, Andi Snow-Weaver, and Gregg Vanderheiden; and the document editors Michael Cooper, and Andi Snow-Weaver.

    Read the article

  • Microsoft MVP Again for 2011

    - by Vincent Maverick Durano
    I just got great news from Microsoft: I have been re-awarded as a Microsoft MVP (Most Valuable Professional) for this year. This is my 3rd year in a row as an MVP and I'm of course very happy about it and feel honored by it. Woohoo!!

    Here's the Proof =}

    Dear Vincent Maverick Durano,

    Congratulations! We are pleased to present you with the 2011 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in ASP.NET/IIS technical communities during the past year. The Microsoft MVP Award provides us the unique opportunity to celebrate and honor your significant contributions and say "Thank you for your technical leadership."

    BIG thanks to Microsoft, my MVP Lead Lilian Quek, readers, and everyone who has supported me!!!

    Read the article

  • JPedal Action for Converting PDF to JavaFX

    - by Geertjan
    The question of the day comes from Mark Stephens, from JPedal (JPedal is the leading 100% Java PDF library, providing a Java PDF viewer, PDF to image conversion, PDF printing, PDF search, and PDF extraction features), in the form of a screenshot: The question is clear. By looking at the annotations above, you can see that Mark has an ActionListener that has been bound to the right-click popup menu on PDF files. Now he needs to get hold of the file to which the Action has been bound. How, oh how, can one get hold of that file? Well, it's simple. Leave everything you see above exactly as it is but change the Java code section to this:

        public final class PDF2JavaFXContext implements ActionListener {

            private final DataObject context;

            public PDF2JavaFXContext(DataObject context) {
                this.context = context;
            }

            public void actionPerformed(ActionEvent ev) {
                FileObject fo = context.getPrimaryFile();
                File theFile = FileUtil.toFile(fo);
                // do something with your file...
            }
        }

    The point is that the annotations at the top of the class bind the Action to either Actions.alwaysEnabled, which is a factory method for creating always-enabled Actions, or Actions.context, which is a factory method for creating context-sensitive Actions. How does the Action get bound to the factory method? The annotations are converted, when the module is compiled, into XML registration entries in the "generated-layer.xml", which you can find in your "build" folder, in the Files window, after building the module. In Mark's case, since the Action should be context-sensitive to PDF files, he needs to bind his PDF2JavaFXContext ActionListener (which should probably be named "PDF2JavaFXActionListener", since the class is an ActionListener) to Actions.context. All he needs to do is pass the object he wants to work with into the constructor of the ActionListener.

    Now, when the module is built, the annotation processor is going to take the annotations and convert them to XML registration entries, but the constructor will also be checked to see whether it is empty or not. In this case, the constructor isn't empty, hence the Action should be context-sensitive and so the ActionListener is bound to Actions.context. Actions.context will do all the enablement work for Mark, so he will not need to provide any code for enabling/disabling the Action. The Action will be enabled whenever a DataObject is selected. Since his Action is bound to Nodes in the Projects window that represent PDF files, the Action will always be enabled whenever Mark right-clicks on a PDF Node, since the Node exposes its own DataObject.

    Once Mark has access to the DataObject, he can get the underlying FileObject via getPrimaryFile and can then convert the FileObject to a java.io.File via FileUtil.toFile. Once he's got the java.io.File, he can do with it whatever he needs.

    Further reading: http://bits.netbeans.org/dev/javadoc/
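
    For readers who have not seen the annotations in the screenshot, the complete registration of such a context-sensitive Action typically combines @ActionID, @ActionRegistration, and @ActionReference with the non-empty constructor shown above. The following is a minimal sketch only; the category, id, display name, and registration path are hypothetical values chosen for illustration, not taken from Mark's module:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.io.File;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionRegistration;
        import org.openide.filesystems.FileObject;
        import org.openide.filesystems.FileUtil;
        import org.openide.loaders.DataObject;

        // Hypothetical registration values - adjust the category, id, display name,
        // and path to match the module and the MIME type used for PDF files.
        @ActionID(category = "Tools", id = "com.example.pdf2javafx.PDF2JavaFXActionListener")
        @ActionRegistration(displayName = "Convert PDF to JavaFX")
        @ActionReference(path = "Loaders/application/pdf/Actions", position = 100)
        public final class PDF2JavaFXActionListener implements ActionListener {

            private final DataObject context;

            // The non-empty constructor tells the annotation processor to bind this
            // listener to Actions.context, making the Action context-sensitive.
            public PDF2JavaFXActionListener(DataObject context) {
                this.context = context;
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                FileObject fo = context.getPrimaryFile();
                File theFile = FileUtil.toFile(fo); // may be null for virtual files
                if (theFile != null) {
                    // hand the file to the PDF-to-JavaFX conversion code...
                }
            }
        }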

    Read the article

  • VIDEO: Improved user experience of PeopleTools 8.50 a hit with customer

    - by PeopleTools Strategy Team
    New and upgraded features in PeopleTools 8.50 really help boost productivity, says Oracle customer Dennis Mesler, of Boise, Inc. From improved navigational flows to enhanced grids to new features such as type-ahead or auto-suggest, users can expect to save time and training with PeopleTools 8.50. To hear more about this customer's opinion on the user experience of PeopleTools 8.50, watch his video HERE

    Read the article

< Previous Page | 313 314 315 316 317 318 319 320 321 322 323 324  | Next Page >