Search Results

Search found 10285 results on 412 pages for 'enterprise repository'.


  • What are requirements for a successful SOA?

    - by Amir Rezaei
    I’m an EA in an organisation with 10,000+ employees. Strategically we are heading towards SOA. I’m currently researching SOA and creating a road map, and I have come across many blogs arguing that “SOA is dead”. We can all agree that SOA is not just web services. The problem is that I find it hard to locate any information on the reasons behind failed SOA initiatives in enterprises. What went wrong, and what went right? My questions are: What are the common SOA mistakes in enterprises that make SOA fail in the long term? Are there any best practices for SOA? What are the most important requirements for a successful SOA in an enterprise? The answers would be valuable feedback for our SOA strategy in this organisation. I have tried to narrow the question down, but that is hard given its nature.

    Read the article

  • Improve Engineering & Construction Project Productivity

    - by [email protected]
    Driving successful project delivery and providing greater value and return for all stakeholders are key goals for firms in the engineering and construction industry. However, increasingly complex construction projects, compressed schedules, ineffective collaboration among project stakeholders, and limited interoperability can get in the way of these goals and lead to reduced productivity. What E&C firms need are solutions that improve global team collaboration, optimize processes, and better communicate electronic project data. Check out the AutoVue for Engineering and Construction Solution Brief and learn how AutoVue enterprise visualization solutions can:
    - improve global project collaboration and communication
    - improve data interoperability
    - support virtual design and construction projects
    - improve change management and maintain accountability

    Read the article

  • Agile Entity Framework 4 Repository: Part 6: Mocks & Unit Tests

    I did finish this series, honest I did. But not in the blog. I’ve shown this in a number of conferences and even in my book, but I never came back and wrote it all down. In fact, I had the whole solution written before I began the series, but it has gone through a lot of changes. Where did I leave off? Agile Entity Framework 4 Repository: Part 1 - Model and POCO Classes; Agile Entity Framework 4 Repository: Part 2 - The Repository; Agile EF4 Repository: Part 3 - Fine Tuning the Repository; Agile... Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories, including Silverlight. Take a look here.

    Read the article

  • Premera Blue Cross Deploys PeopleSoft Enterprise 9.1 Human Capital Management, Financial Management, Enterprise Learning Management and Enterprise Portal Solutions

    - by jay.richey
    Optimum Solutions Implements Oracle's PeopleSoft Enterprise 9.1 at Premera Blue Cross. Premera chose to upgrade to the latest version of PeopleSoft to help the company achieve its strategic goals, which include building and maintaining a skilled employee team that enables the company to deliver highly efficient and valuable service to plan subscribers, sponsors, and healthcare providers. Its decision was influenced by the key capabilities in PeopleSoft Talent Management 9.1, as well as the common technology enhancements of the PeopleSoft PeopleTools 8.50 toolset across all business process areas, which have helped Premera maximize process automation, increase ease of use, and minimize long-term IT support overhead. Read more...

    Read the article

  • WPF tree data binding model & repository

    - by Am
    Hi, I have a well defined tree repository where I can rename items, move them up and down, and add and delete them. The data is stored in a table as follows:

    Index  Parent  Label    Left  Right
    1      0       root     1     14
    2      1       food     2     7
    3      2       cake     3     4
    4      2       pie      5     6
    5      1       flowers  8     13
    6      5       roses    9     10
    7      5       violets  11    12

    representing the following tree (the numbers in parentheses are the Left/Right values):

    (1) root (14)
      (2) food (7)
        (3) cake (4)
        (5) pie (6)
      (8) flowers (13)
        (9) roses (10)
        (11) violets (12)

    or simply:

    root
      food
        cake
        pie
      flowers
        roses
        violets

    Now, my problem is how to represent this in a bindable way, so that a TreeView can handle all the possible data changes. Renaming is easy: all I need is to make the label an updatable field. But what if a user moves flowers above food? I can make the relevant data changes, but they cause a complete data change to all other items in the tree. And all the examples of bindable hierarchies I found are no good for non-static trees. So my current (and bad) solution is to reload the displayed tree after a relocation. Any direction will be good. Thanks

    Read the article
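
    One direction for the bindable layer (a minimal sketch only, using a hypothetical TreeNode class rather than the poster's actual repository types): keep the nested-set table in the repository, but project it into nodes whose children live in an ObservableCollection and whose label raises INotifyPropertyChanged, so renames and moves update the TreeView in place instead of forcing a reload.

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        // Hypothetical bindable node for a WPF TreeView; not the poster's actual model.
        public class TreeNode : INotifyPropertyChanged
        {
            private string label;

            public string Label
            {
                get { return label; }
                set { label = value; OnPropertyChanged("Label"); } // rename shows up immediately
            }

            // ObservableCollection raises CollectionChanged, so adds, removes and moves
            // are picked up by the TreeView without rebuilding the whole tree.
            public ObservableCollection<TreeNode> Children { get; private set; }

            public TreeNode()
            {
                Children = new ObservableCollection<TreeNode>();
            }

            // Moving a subtree becomes an in-place collection operation.
            public static void Move(TreeNode node, TreeNode oldParent, TreeNode newParent, int index)
            {
                oldParent.Children.Remove(node);
                newParent.Children.Insert(index, node);
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    In XAML, a HierarchicalDataTemplate with ItemsSource bound to Children and a TextBlock bound to Label lets the TreeView render this shape directly; the nested-set Left/Right values only need to be recomputed when a change is written back to the table.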

  • Enterprise Manager 12c and the Sun ZFS Storage Appliance

    - by user13138569
    (Originally posted in Japanese.) Oracle Enterprise Manager 12c can monitor the Sun ZFS Storage Appliance through a dedicated plug-in, available from My Oracle Support and the Oracle Technology Network (see the Oracle ZFS Storage Appliance Plugin Downloads page). The referenced guide covers setting up the monitoring workflow on the appliance side (p. 3) and registering the appliance in Enterprise Manager (p. 10), where the corresponding steps for Enterprise Manager 11g are also noted. Once registered, the Sun ZFS Storage Appliance can be monitored from Enterprise Manager alongside the databases that use it.

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will dive more deeply into specific features in subsequent posts.

    "What's New?" In this release, we focused on four things:
    1. Lifecycle management support for the new Database 12c - Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of large numbers of systems by
       · leveraging new framework capabilities for lifecycle operations, such as the new advanced ‘emcli’ script option
       · refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features
       · rollback support for single-instance databases
       · improved "OFFLINE" patching experience
       · faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for the new Database 12c - Pluggable Databases
    Database 12c introduces Pluggable Databases (PDBs), the brand new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)
    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom-built solutions. Customers often have weekly sync-up meetings across stakeholders to collect status and updates. Some change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types). Here are some examples of change activity processes in a datacenter:
    · Patching large environments (PSU/CPU patching cycles)
    · Upgrading large numbers of database environments
    · Rolling out compliance rules
    · Database consolidation to Exadata environments
    CAP provides user flows for compliance officers/managers (incl. lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects and allocate resources, targets, and the groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or, in some cases, outside Enterprise Manager). Upon completion, compliance is evaluated for validation, and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a Large Number of Systems
    Improved configuration comparison: Get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager performs the comparison immediately and takes the user directly to the results, without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that differences are much easier to spot, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.
    Improved configuration search & advanced EMCLI script option for mass automation: Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages include retrieving a qualified list of targets using configuration search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search; this required the use of the advanced options, which are now clearly defined and easy to use. You can also perform a configuration search from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by refining on options like name and host:
    emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"
    Use this in powerful mass automation operations with the new EMCLI script option. For example, to solve the use case of finding all databases running on Exadata that house E-Biz, and patching them: create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with the script option directly:
    $<path to emcli>/emcli @myPSU_Patch.py
    Richer compliance content: There are now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory, plus 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management
    Overhauled "OFFLINE" patching experience: A simplified patch upload UI improves the offline patching experience. There is now a single-step process to get patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:
    $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>
    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.
    Other improvements: Patch rollback for single-instance databases is a new option in the patch plan to roll back the patches added to the plan; upon execution, the procedure rolls back the patch and the SQL applied to the single-instance database. Improved and faster configuration collection of Oracle Home targets enables more reliable automation for higher-level functions like provisioning, patching or Database as a Service.

    Just to recap, here is a list of database lifecycle management features:
    • Discovery, inventory tracking and reporting
    • Database provisioning, including
      o migration to pluggable databases
      o plugging and unplugging of pluggable databases
      o gold-image-based cloning
      o scaling of RAC nodes
    • Schema and data change management
    • End-to-end patch management in online and offline modes, including
      o patch advisories in online (connected with My Oracle Support) and offline mode
      o patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases)
      o reporting
    • Upgrade planning and execution of the upgrade process
    • Configuration management
    • Compliance management with out-of-box content
    • Change Activity Planner for planning, designing and tracking long-running processes
    For more information on Enterprise Manager’s database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html

    Read the article

  • Google I/O 2010 - OpenSocial in the Enterprise

    Google I/O 2010 - Best Practices for Implementing OpenSocial in the Enterprise. Social Web, Enterprise 201. Mark Weitzel, Matt Tucker, Mark Halvorson, Helen Chen, Chris Schalk. Enterprise deployments of OpenSocial technologies bring an additional set of considerations that may not be apparent in a traditional social network implementation. In this session, several enterprise vendors will demonstrate how they've been working together to address these issues in a collection of "Best Practices". This session will also provide a review of existing challenges for enterprise implementations of OpenSocial. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Views: 5. 0 ratings. Time: 38:23. More in Science & Technology

    Read the article

  • September issue of the Enterprise Manager Indepth Newsletter

    - by Javier Puerta
    The September issue of the Enterprise Manager Indepth Newsletter is now available here. Featured articles include:
    Oracle OpenWorld Preview: Don't-Miss Sessions, Hands-on Labs, and More
    Because of the rapid and widespread adoption of Oracle Enterprise Manager 12c since its launch at Oracle OpenWorld 2011, conference organizers are expecting Oracle Enterprise Manager sessions to attract record crowds at Oracle OpenWorld 2012. Read More
    Oracle Cloud Builder Summit—Zero to Enterprise Cloud in Two Hours
    In August, Oracle launched the worldwide Oracle Cloud Builder Summit series, an event where attendees learn firsthand how to plan, deploy, and manage an enterprise private cloud using Oracle Enterprise Manager 12c—all in a few hours. Read More
    WEBCASTS
    Reduce Database Testing Efforts While Maximizing ROI
    Watch this on-demand Webcast demonstrating how to manage database and system changes with confidence using Oracle Real Application Testing. Viewers will be among the first to hear results from Forrester Consulting's commissioned, multicustomer study, “Total Economic Impact of Oracle Real Application Testing.”

    Read the article

  • Oracle Enterprise Manager 12c (EM12c): Exadata Monitoring and Management

    - by Kumiko Fujita
    (Originally posted in Japanese.) Oracle Enterprise Manager 12c provides monitoring and management of Oracle Exadata, extending what was available with Oracle Enterprise Manager 11g and adding coverage of components such as the Exadata Storage Server and the InfiniBand network. The post highlights three capabilities: a consolidated view of the entire Exadata Database Machine and its components, resource monitoring across the machine (including CPU utilization and I/O), and monitoring and alerting on the Exadata environment itself. Monitored items include Storage Server grid disks and cell disks, Flash Cache, BIOS and InfiniBand firmware versions, and the database hosts' operating systems. The post links to an "Exadata Monitoring" presentation (PDF) and a recorded session available in WMV and MP4 formats.

    Read the article

  • IOUG Enterprise Manager SIG Webinar: Performance Tuning Your Database Cloud in Oracle Enterprise Manager 12c Cloud Control - 360 Degrees

    - by Patrick Rood
    October 25, 2013 EM 12c Sales Blast | IOUG Enterprise Manager SIG Webinar: Performance Tuning Your Database Cloud in Oracle Enterprise Manager 12c Cloud Control - 360 Degrees
    Last year, the Independent Oracle User Group (IOUG) established a fast-growing Special Interest Group (SIG) devoted to Enterprise Manager, and has sponsored quarterly newsletters and webinars about EM. To drive more interest in EM and the SIG, IOUG would like Oracle to invite customers to its latest techcast. Your customers will learn how to leverage Oracle Enterprise Manager 12c for tuning, troubleshooting and monitoring their Oracle Database Cloud ecosystem. The session covers lessons learned, tips and tricks, recommendations, best practices, "gotchas" and a whole lot more on how to effectively use Oracle Enterprise Manager 12c Cloud Control for quick, easy and intuitive performance tuning of an Oracle Database Cloud.
    Session objectives:
    • Leveraging Enterprise Manager 12c Cloud Control for Oracle Database tuning and monitoring
    • Limited deep dive on Automatic Workload Repository (AWR)
    • Oracle Database Cloud performance tuning
    • Best practices for Database Cloud maintenance and monitoring
    Featured speakers: Tariq Farooq, CEO, BrainSurface, and Mike Ault
    Date & time: Wednesday, October 30, 12:00 PM - 1:00 PM Central Time (USA)
    Register Here

    Read the article

  • Is it possible to use the second part of this code for repository patterns and generics?

    - by newToCSharp
    Are there any issues in using version 2 to get the same results as version 1, or is this just bad coding? Any ideas?

    public class Customer
    {
        public int CustomerID { get; set; }
        public string EmailAddress { get; set; }
        int Age { get; set; }
    }

    public interface ICustomer
    {
        void AddNewCustomer(Customer Customer);
        void AddNewCustomer(string EmailAddress, int Age);
        void RemoveCustomer(Customer Customer);
    }

    public class BALCustomer
    {
        private readonly ICustomer dalCustomer;

        public BALCustomer(ICustomer dalCustomer)
        {
            this.dalCustomer = dalCustomer;
        }

        public void Add_A_New_Customer(Customer Customer)
        {
            dalCustomer.AddNewCustomer(Customer);
        }

        public void Remove_A_Existing_Customer(Customer Customer)
        {
            dalCustomer.RemoveCustomer(Customer);
        }
    }

    public class CustomerDataAccess : ICustomer
    {
        public void AddNewCustomer(Customer Customer)
        {
            // MAKE DB CONNECTION AND EXECUTE
            throw new NotImplementedException();
        }

        public void AddNewCustomer(string EmailAddress, int Age)
        {
            // MAKE DB CONNECTION AND EXECUTE
            throw new NotImplementedException();
        }

        public void RemoveCustomer(Customer Customer)
        {
            // MAKE DB CONNECTION AND EXECUTE
            throw new NotImplementedException();
        }
    }

    // VERSION 2
    public class Customer_New : DataRespository<CustomerDataAccess>
    {
        public int CustomerID { get; set; }
        public string EmailAddress { get; set; }
        public int Age { get; set; }
    }

    public class DataRespository<T> where T : class, new()
    {
        private T item = new T();

        public T Execute
        {
            get { return item; }
            set { item = value; }
        }

        public void Update()
        {
            // TO BE CODED
        }

        public void Save()
        {
            // TO BE CODED
        }

        public void Remove()
        {
            // TO BE CODED
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Customer_New cus = new Customer_New()
            {
                Age = 10,
                EmailAddress = "[email protected]"
            };
            cus.Save();
            cus.Execute.RemoveCustomer(new Customer());

            // Repository Version
            Customer customer = new Customer()
            {
                EmailAddress = "[email protected]",
                CustomerID = 10
            };
            BALCustomer bal = new BALCustomer(new CustomerDataAccess());
            bal.Add_A_New_Customer(customer);
        }
    }

    Read the article
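
    For comparison, a minimal sketch of the more conventional shape of this pattern (the names below are hypothetical, not from the question): keep the entity a plain object and make the repository generic over the entity type, instead of having the entity inherit from its own repository the way Customer_New does in version 2.

        // Hypothetical generic repository: the entity is the type parameter and data
        // access stays behind an interface, so it can be mocked in unit tests.
        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Remove(T entity);
            void Save();
        }

        public class CustomerRepository : IRepository<Customer>
        {
            public void Add(Customer entity) { /* DB call goes here */ }
            public void Remove(Customer entity) { /* DB call goes here */ }
            public void Save() { /* commit goes here */ }
        }

        // Usage: Customer stays a POCO; nothing inherits from the repository.
        // IRepository<Customer> repo = new CustomerRepository();
        // repo.Add(new Customer { CustomerID = 10 });
        // repo.Save();

    The difference from version 2 is that persistence concerns never leak into the entity's inheritance chain, which keeps the entity reusable and the data-access layer swappable.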

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that becomes handy at the moment of unit testing the code around this repository. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF or Silverlight based applications but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection.

    Order order = new Order { Name = "Foo" };
    OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };
    var context = new OrderContext();
    context.AddToOrders(order);
    context.AddToOrderItems(item);
    context.SetLink(item, "Order", order);
    context.SaveChanges();

    Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S.

    Order order = new Order { Name = "Foo" };
    OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };
    order.Items.Add(item);
    var context = new OrderContext();
    context.AddToOrders(order);
    context.SaveChanges();

    In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (you can find more information about this new feature and how to use it in this post).

    DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0

    Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly.

    1. All the entities retrieved directly from the data context with a query track their changes and report those to the data context automatically.

    2. An entity created with "new" does not track any change in its properties or associations. In order to enable change tracking in this entity, you need to do the following trick.

    public Order CreateOrder()
    {
        var collection = new DataServiceCollection<Order>(this.context);
        var order = new Order();
        collection.Add(order);
        return order;
    }

    You basically need to create a collection, and add the entity to that collection with the "Add" method to enable change tracking on that entity.

    3. If you need to attach an existing entity to a data context for tracking changes (for example, if you created the entity with the "new" operator rather than retrieving it from the data context with a query), you can use the "Load" method of the DataServiceCollection.

    var order = new Order { Id = 1 };
    var collection = new DataServiceCollection<Order>(this.context);
    collection.Load(order);

    In this case, the order with Id = 1 must exist in the data source exposed by the data service. Otherwise, you will get an error because the entity does not exist.
    The cool extension methods discussed by Stuart Leeks in this post, which replace all the magic strings in the "Expand" operation with expression trees, represent another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace a query that uses magic strings with a piece of code that only uses expressions.

    Magic strings,

    var customers = dataContext.Customers
        .Expand("Orders")
        .Expand("Orders/Items")

    Expressions,

    var customers = dataContext.Customers
        .Expand(c => c.Orders.SubExpand(o => o.Items))

    That query basically returns all the customers with their orders and order items. Ok, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this,

    public interface IRepository
    {
        T Create<T>() where T : new();
        void Update<T>(T entity);
        void Delete<T>(T entity);
        IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
        IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties);
        void Attach<T>(T entity);
        void SaveChanges();
    }

    The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository.

    public class DataServiceRepository : IRepository
    {
        ResourceRepositoryContext context;

        public DataServiceRepository()
            : this(new DataServiceContext())
        {
        }

        public DataServiceRepository(DataServiceContext context)
        {
            this.context = context;
        }

        private static string ResolveEntitySet(Type type)
        {
            var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault();
            if (entitySetAttribute != null)
                return entitySetAttribute.EntitySet;
            return null;
        }

        public T Create<T>() where T : new()
        {
            var collection = new DataServiceCollection<T>(this.context);
            var entity = new T();
            collection.Add(entity);
            return entity;
        }

        public void Update<T>(T entity)
        {
            this.context.UpdateObject(entity);
        }

        public void Delete<T>(T entity)
        {
            this.context.DeleteObject(entity);
        }

        public void Attach<T>(T entity)
        {
            var collection = new DataServiceCollection<T>(this.context);
            collection.Load(entity);
        }

        public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
        {
            var entitySet = ResolveEntitySet(typeof(T));
            var query = context.CreateQuery<T>(entitySet);
            foreach (var e in eagerProperties)
            {
                query = query.Expand(e);
            }
            return query.Where(predicate);
        }

        public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
        {
            var entitySet = ResolveEntitySet(typeof(T));
            var query = context.CreateQuery<T>(entitySet);
            foreach (var e in eagerProperties)
            {
                query = query.Expand(e);
            }
            return query;
        }

        public void SaveChanges()
        {
            this.context.SaveChanges(SaveChangesOptions.Batch);
        }
    }

    For instance, you can use the following code to retrieve customers with a first name equal to "John", and all their orders, in a single call.

    repository.Retrieve<Customer>(
        c => c.FirstName == "John", // Where
        c => c.Orders.SubExpand(o => o.Items));

    In case you want to have some pre-defined queries that you are going to use in several places, you can put them in a specific class.

    public static class CustomerQueries
    {
        public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
        {
            return c => c.LastName == lastName;
        }
    }

    And then, use it with the repository.

    repository.Retrieve<Customer>(
        CustomerQueries.LastNameEqualsTo("foo"),
        c => c.Orders.SubExpand(o => o.Items));

    Read the article
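
    Since the stated motivation for the IRepository abstraction above is easier unit testing, here is a minimal in-memory fake that satisfies the same interface (a sketch added for illustration, not part of the original article); the eager-loading expressions are simply ignored because they only matter against the real data service.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        // Hypothetical in-memory fake of the IRepository interface above, for unit tests.
        // Entities live in per-type lists instead of a DataServiceContext.
        public class InMemoryRepository : IRepository
        {
            private readonly Dictionary<Type, List<object>> store = new Dictionary<Type, List<object>>();

            private List<object> ListFor(Type type)
            {
                List<object> list;
                if (!store.TryGetValue(type, out list))
                {
                    list = new List<object>();
                    store[type] = list;
                }
                return list;
            }

            public T Create<T>() where T : new()
            {
                var entity = new T();
                ListFor(typeof(T)).Add(entity);
                return entity;
            }

            public void Update<T>(T entity) { /* nothing to do in memory */ }

            public void Delete<T>(T entity) { ListFor(typeof(T)).Remove(entity); }

            public void Attach<T>(T entity) { ListFor(typeof(T)).Add(entity); }

            public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
            {
                return ListFor(typeof(T)).Cast<T>().AsQueryable();
            }

            public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate,
                                             params Expression<Func<T, object>>[] eagerProperties)
            {
                return RetrieveAll<T>().Where(predicate);
            }

            public void SaveChanges() { /* nothing to persist in memory */ }
        }

    A test can hand this fake to the code under test instead of DataServiceRepository and assert directly against what was stored.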

  • Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn

    - by JuergenKress
    SCA has been available since the release of Oracle SOA Suite 11g. This technology combines and orchestrates several SOA components inside an SCA composite, making design, development, deployment, and maintenance easier. SCA development is metadata-driven, meaning that metadata artifacts, such as Web Services Description Language (WSDL), XML Schema Definition (XSD), XML, and others, define the composite's behavior. With the increased number of composites and the dependencies among them, it became necessary to manage all the metadata in an adequate way. This article addresses the advantages of using the Oracle Metadata Services (MDS) repository as central storage for that metadata. The MDS repository is a central part of the Oracle Fusion Middleware landscape, managing the metadata for several technologies, such as Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, and Oracle SOA Suite. The article is divided into three parts. The first part provides an overview of SCA and MDS. The second part describes some MDS tasks that help in the management of the SCA metadata files inside the repository. The third part shows how to develop SCA composites in combination with an MDS repository. Read the full article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to manage security of these self-hosted Web APIs, to ensure that requests for data are authenticated?

    - by Husrat Mehmood
    Let's pretend I am going to work on an enterprise application. Say I have 11 modules in the application, and I would have to develop dashboards for every role in the organization for which we are going to develop the application. We decided to use ASP.NET Web API and return JSON data from our APIs. We are going to include 11 self-hosted Web API projects in our application, one self-hosted Web API for every module. All 11 modules are connected to one SQL Server 2012 database. Then, once the APIs are ready, we would have to create business dashboards (based upon roles in the organization). So now my Web API client is an ASP.NET MVC application; ASP.NET MVC will consume those Web APIs. Here is the part all of this explanation leads to: How should I manage security of all 11 self-hosted Web APIs? How should I ensure that only authenticated requests are coming in? If I authenticate the user by login and password, then redirect the user to the appropriate dashboard designed for the role that user has, and load data by consuming the Web APIs, how should I ensure that the requests coming in for data are authenticated?

    Read the article
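
    One common direction for the question above (a minimal sketch with a hypothetical token check, not a complete security design): put a message handler in front of each self-hosted Web API so every request is validated before any controller runs, and have all 11 hosts accept tokens issued by the same login endpoint.

        using System.Net;
        using System.Net.Http;
        using System.Threading;
        using System.Threading.Tasks;

        // Hypothetical authentication handler for ASP.NET Web API self-host.
        // Requests without a recognized bearer token are rejected before routing
        // reaches any controller; IsValid is a placeholder for real token validation.
        public class TokenAuthenticationHandler : DelegatingHandler
        {
            protected override Task<HttpResponseMessage> SendAsync(
                HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var auth = request.Headers.Authorization;
                if (auth == null || auth.Scheme != "Bearer" || !IsValid(auth.Parameter))
                {
                    // Short-circuit: unauthenticated callers never reach the data layer.
                    var response = new HttpResponseMessage(HttpStatusCode.Unauthorized);
                    return Task.FromResult(response);
                }
                return base.SendAsync(request, cancellationToken);
            }

            private static bool IsValid(string token)
            {
                // Placeholder: a real implementation would verify the token's signature,
                // expiry, and the role claims used to route users to their dashboards.
                return !string.IsNullOrEmpty(token);
            }
        }

        // Registration in each of the 11 self-hosted projects (types come from
        // System.Web.Http and System.Web.Http.SelfHost):
        //   var config = new HttpSelfHostConfiguration("http://localhost:8080");
        //   config.MessageHandlers.Add(new TokenAuthenticationHandler());

    The ASP.NET MVC front end then obtains a token at login and attaches it to every outgoing call, so the same check works uniformly across all 11 services.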

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product at Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts, like configuration, tests or monitoring data, in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development, like configuration, application description or topology (a high-level view of the components that make up a system), logging information or binaries. It took us several months to give a form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of "TeleSharp". TeleSharp provides, in a nutshell, the following features:
    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can say that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them to the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types remotely on your application servers in a web browser, using an interface similar to any of the existing .NET reflection tools. You can easily determine this way whether the server is running the correct version of your applications.
    4. Centralize logging and exception management into the repository. You get different reports and a pivot viewer experience for browsing all the different logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.
    The central repository itself is implemented as a set of OData services that any application can easily consume using regular HTTP. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.

    Read the article
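
    Because the repository described above is exposed as OData services over plain HTTP, reading from it needs nothing more than an HTTP client; here is a minimal sketch (the endpoint URL and entity set name are invented for illustration and are not TeleSharp's actual feeds):

        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        // Hypothetical read against an OData feed; URL and entity set are placeholders.
        public static class RepositoryClient
        {
            public static async Task<string> GetApplicationsAsync()
            {
                using (var client = new HttpClient())
                {
                    // Ask for JSON instead of the default Atom payload.
                    client.DefaultRequestHeaders.Accept.Add(
                        new MediaTypeWithQualityHeaderValue("application/json"));

                    // Standard OData query options ($filter, $top, ...) go on the query string.
                    var url = "http://your-server/telesharp/Repository.svc/Applications?$top=10";
                    return await client.GetStringAsync(url);
                }
            }
        }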

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock-solid, enterprise operating system. One of the great new features is the repository-based Image Packaging System. IPS not only introduces new cloud-based package installation services, it is also integrated with our zones, boot environments and ZFS file systems to provide a safe, easy and fast way to perform system updates. My customers typically don't have network access and, in fact, can't connect to any network until they have "authority to connect." It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository, with full explanations of what the commands do; I'm simply providing the quick and dirty steps. The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive. Then, as root:
    pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris
    Now you can install software using the GUI package manager or the pkg commands. If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work. After installing Solaris 11, download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site, sneaker-net the files to your Solaris 11 system, then unzip and cat the two files together to create one large ISO image. The file is about 6.9 GB in size.
    zfs create rpool/export/repoSolaris11
    zfs set atime=off rpool/export/repoSolaris11
    zfs set compression=on rpool/export/repoSolaris11   (save some space)
    lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1
    mount -F hsfs /dev/lofi/1 /mnt
    You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location.
    rsync -aP /mnt/repo /export/repoSolaris11
    pkgrepo -s /export/repoSolaris11 refresh
    pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris
    You now have a locally installed repository for adding additional software packages to Solaris 11. The documentation also takes you through publishing your repository on the network so that others can access it.

    Read the article

  • Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c

    - by Richard Rotter
    Cloud Computing? Everyone is talking about the cloud these days. Everyone is explaining how the cloud will help you bring your services up and running very fast, securely and with little effort. You can find these kinds of presentations at almost every event around the globe. But what is really behind all this stuff? Is it really so simple? And the answer is: Yes it is! With the Oracle SW stack it is! In this post, I will try to bring this down to earth, demonstrating how easy it can be to build a cloud infrastructure with Oracle's solution for cloud computing. But let me cover some basics first:
    How fast can you build a cloud?
    How elastic is your cloud, so you can provide new services on demand?
    How much effort does it take to monitor and operate your cloud infrastructure in order to meet your SLAs?
    How easy is it to charge back for the services provided?
    These are the critical success factors of cloud computing, and Oracle has an answer to all those questions. By using Oracle VM for x86 in combination with Enterprise Manager 12c you can build and control your cloud environment very quickly and easily. What are the fundamental building blocks for your cloud?

    Oracle Cloud Building Block #1: Hardware
    Surprise, surprise. Even the cloud needs to run somewhere, hence you will need hardware. This HW normally consists of servers, storage and networking. But Oracle goes beyond that. There are Optimized Solutions available for your cloud infrastructure. This is a cookbook to build your HW cloud platform. For example, building your cloud infrastructure with blades and our network infrastructure will reduce complexity in your datacenter (blades with switch network modules, splitter cables to reduce the amount of cabling, and TOR (Top Of the Rack) switches which form the interface to your infrastructure environment). Reducing complexity even in the cabling will help you manage your environment more efficiently and with less risk. Of course, our engineered systems fit into the cloud perfectly too. Although they are considered a PaaS themselves, having the database SW (for Exadata) and the application development environment (for Exalogic) already deployed on them, in general they are ideal systems to enable you to build your own cloud and PaaS infrastructure.

    Oracle Cloud Building Block #2: Virtualization
    The next missing link in the cloud setup is virtualization. For me personally, it's one of the most hidden "secrets" that Oracle can provide you with a complete virtualization stack in terms of a hypervisor on both architectures: x86 and SPARC CPUs. There is Oracle VM for x86 and Oracle VM for SPARC available at no additional license cost if you are running this virtualization stack on top of Oracle HW (and with Oracle Premier Support for HW). This completes the virtualization portfolio together with Solaris Zones, introduced with Solaris 10 a few years ago. Let me explain how Oracle VM for x86 works. It consists of two main parts:
    - The Oracle VM Server: Oracle VM Server is installed on bare metal and is the hypervisor which is able to run virtual machines. It has a very small footprint: the ISO image of Oracle VM Server is only 200 MB. It is very small but efficient. You can install an OVM Server in less than 5 minutes by booting the server with the ISO image assigned and providing the necessary configuration parameters (much like installing a Linux distribution). After the installation, the OVM Server is ready to use. That's all.
    - The Oracle VM Manager: OVM Manager is the central management tool where you can control your OVM Servers. OVM Manager provides the graphical user interface, which is an Application Development Framework (ADF) application with a familiar web-browser based interface, to manage Oracle VM Servers, virtual machines, and resources. The Oracle VM Manager has the following capabilities:
    • Create virtual machines
    • Create server pools
    • Power on and off virtual machines
    • Manage networks and storage
    • Import virtual machines, ISO files, and templates
    • Manage high availability of Oracle VM Servers, server pools, and virtual machines
    • Perform live migration of virtual machines
    I want to highlight one of the goodies you can use if you are running Oracle VM for x86: preconfigured, downloadable virtual machine templates from eDelivery. With these templates, you can download completely preconfigured virtual machines into your environment, boot them up, configure them at first boot and use them. There are templates available for almost all Oracle SW and applications (like Fusion Middleware, Database, Siebel, etc.).

    Oracle Cloud Building Block #3: Cloud Management
    The management of your cloud infrastructure is key. This is a day-to-day job. Acquiring HW and installing a virtualization layer on top of it is done just at the beginning and when you want to expand your infrastructure. But managing your cloud, keeping it up and running, deploying new services, changing your chargeback model, etc., these are the daily jobs. These jobs must be simple, secure and easy to manage. Enterprise Manager 12c Cloud provides this functionality from one management cockpit. Enterprise Manager 12c uses Oracle VM Manager to control OVM server pools. Once you have registered your OVM Managers in Enterprise Manager, you are able to set up your cloud infrastructure and manage everything from Enterprise Manager. What you need to do in EM12c is:
    Register your OVM Manager in Enterprise Manager. After registering your OVM Manager, all the functionality of Oracle VM for x86 is also available in Enterprise Manager. Enterprise Manager works as a "manager of the manager". You can register as many OVM Managers as you want and control your complete virtualization environment.
    Create roles and users for your Self Service Portal in Enterprise Manager. With this step you allow users to log on to the Enterprise Manager Self Service Portal, where they can request virtual machines.
    Set up the cloud infrastructure. Set up the quotas for your self-service users. How many VMs can they request? How much of your resources (CPU, memory, storage, network, etc.)? Which SW components (templates, assemblies) can your self-service users request? In this step, you basically set up the complete cloud infrastructure.
    Set up chargeback. Once your cloud is set up, you need to configure your chargeback mechanism. Enterprise Manager collects the resource metrics at a very detailed level, and almost all collected metrics can be used in the chargeback module. You can define chargeback plans based on configuration (charge for the amount of CPU, memory or storage assigned to a machine, or for a specific OS that is installed), on resource consumption (% of CPU used, storage used, etc.), or on a combination of configuration and consumption. The chargeback module is very flexible.
    Here is an overview of the workflow for handling an infrastructure cloud in EM:

    Summary
    As you can see, setting up an Infrastructure Cloud Service with Oracle VM for x86 and Enterprise Manager 12c is really simple. I personally configured a complete cloud environment with three x86 servers and a small JBOD SAN box in less than 3 hours. There is no magic in it, it is all straightforward. Of course, you have to have some experience with Oracle VM and Enterprise Manager, and experience in setting up Linux environments helps as well. I plan to publish a technical cookbook in the next few weeks. I hope you found this post useful and will see you again here on our blog. Any hints or comments are welcome!

    Read the article

  • Oracle VM repository creation seems contradictory to its server pool?

    - by Michael
    I found something contradictory in Oracle VM. Clustered server pool creation in Oracle VM would format my FC LUN as ocfs2 and start the o2cb and ocfs2 services to build the cluster environment. After that, when I wanted to create a repository on the server pool, it unexpectedly told me that the physical disk I chose, which is also my FC LUN, already contains a file system. What a contradiction! So what now, delete the file system in the server pool? If so, why was it created in the first place?!
    OVM> list physicaldisk
    Command: list physicaldisk
    Status: Success
    Time: 2012-09-10 06:44:42.660
    Data: id:0004fb00001800007765e62381895f61 name:OVM_HDS
    OVM> create serverpool clusterenable=true virtualip=10.84.21.123 physicaldisk=OVM_HDS name=ovmserverpool
    Server pool creation took quite a long time since my FC LUN was big. When the creation completed, my FC LUN was formatted as ocfs2 and the o2cb and ocfs2 services were started on my OVM servers successfully. But then repository creation indeed threw me a big surprise ...
    OVM> create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
    Command: create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
    Status: Failure
    Time: 2012-09-10 06:23:44.656
    Error Msg: com.oracle.ovm.mgr.api.exception.RuleException: OVMRU_002026E Cannot use or delete physical disk: OVM_HDS, it already contains a file system: [Pool filesystem for ovmserverpool] Mon Sep 10 06:23:44 CST 2012
    What should I do now? Delete the file system using the dd command? That would destroy the server pool, right? I'm really confused. My OVM Manager version is 3.1.1.399, which is the latest. Any tips are appreciated. Thanks.

    Read the article

  • strange bundler error: tar_input.rb:49:in `initialize': not in gzip format (Zlib::GzipFile::Error) o

    - by z3cko
    i am getting a strange bundler error when running bundle pack with bundler 0.9.12 any ideas? (see pastie for a better formatted code: http://pastie.org/881328 ) /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_input.rb:49:in `initialize': not in gzip format (Zlib::GzipFile::Error) from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_input.rb:49:in `new' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_input.rb:49:in `initialize' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_reader.rb:63:in `each' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_reader.rb:54:in `loop' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_reader.rb:54:in `each' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_input.rb:32:in `initialize' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_input.rb:17:in `new' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package/tar_input.rb:17:in `open' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/package.rb:55:in `open' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/format.rb:63:in `from_io' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/format.rb:51:in `from_file_by_path' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/format.rb:50:in `open' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/site_ruby/1.8/rubygems/format.rb:50:in `from_file_by_path' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/source.rb:115:in `specs' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/source.rb:114:in `each' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/source.rb:114:in `specs' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/index.rb:32:in `from_cached_specs' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/index.rb:23:in `application_cached_gems' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/index.rb:15:in `cached_gems' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/index.rb:5:in `build' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/index.rb:14:in `cached_gems' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/environment.rb:15:in `index' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/index.rb:5:in `build' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/environment.rb:13:in `index' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/runtime.rb:86:in `specs' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/runtime.rb:130:in `details' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/runtime.rb:119:in `write_yml_lock' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/runtime.rb:65:in `lock' from 
/opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/cli.rb:89:in `lock' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/cli.rb:131:in `package' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor/task.rb:33:in `send' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor/task.rb:33:in `run' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor/invocation.rb:109 from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor/invocation.rb:116:in `call' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor/invocation.rb:116:in `invoke' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor.rb:137:in `start' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor/base.rb:378:in `start' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/lib/bundler/vendor/thor.rb:124:in `start' from /opt/ruby-enterprise-1.8.7-2010.01/lib/ruby/gems/1.8/gems/bundler-0.9.12/bin/bundle:11 from /opt/REE/bin/bundle:19:in `load' from /opt/REE/bin/bundle:19

    Read the article

  • How to Set Up Your Enterprise Social Organization

    - by Mike Stiles
    The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise. Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring “off the grid” innovation. An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, “What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?” What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration. That does not, however, mean this mass collaboration has to lead to uncontrolled chaos. In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today’s social enterprise, including internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishment of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging. Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each.
    • Decentralized – Anyone can do anything at any time on any social channel.
    • Centralized – One central group controls all social communication for the company.
    • Hub and Spoke – A centralized group, but business units can operate their own social under the hub’s guidance and execution. Most enterprises are using this model.
    • Dandelion – Each business unit develops its own social strategy and staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE.
    • Honeycomb – Every employee can do social, but as opposed to the decentralized model, it’s coordinated and monitored on one platform.
    The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. 
It’s neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you’ve set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

  • Error connecting to online fossil repository after changing password.

    - by Toby Allen
    I set up a Fossil repository on a shared hosting account. I created a Perl script, fossil.pl, that points to a cloned repository I put up on the webspace, and I set the correct permissions (755). When I go to fossil.pl I get the web UI, so everything's cool. However, I'm having a problem with pushes and hoping someone can point me to a solution. When I clone the repository, Fossil sets a new password for my user (Toby) in the new cloned repository. If I push to the online repository without changing that password, it works fine; I can push changes from my local machine to the online repository. However, once I change the password for Toby (to something more easily remembered), I get the following error:
        Bytes Cards Artifacts Deltas
        Send: 1810 9 0 2 1
        Server Error: not authorized to write
        fossil: server says: not authorized to write
    Anyone know why this is happening, and how to fix it?
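    A likely cause, hedged because the excerpt doesn't say where the password was changed: the password Fossil checks on a push is the one the server-side repository holds for Toby, so changing it in only one of the two copies (the local clone or the served clone) leaves the credentials out of step and the push is rejected as "not authorized to write". Below is a minimal sketch of one way to realign the two sides. The paths, URL, and helper names are hypothetical placeholders; `fossil user password`, the `-R` repository option, and `fossil push` are standard Fossil commands, though the exact invocation here is only an illustration.
        # Sketch, not a definitive fix: set Toby's password on the server-side repository,
        # then push again from the local clone with the user named in the URL.
        import getpass
        import subprocess

        SERVER_REPO = "/home/toby/fossils/project.fossil"          # hypothetical path on the host
        REMOTE_URL = "https://Toby@example.com/cgi-bin/fossil.pl"  # hypothetical CGI endpoint

        def run(args):
            """Echo and run a command, stopping on failure."""
            print("$ " + " ".join(args))
            subprocess.run(args, check=True)

        def set_server_password():
            """Run this on the hosting account, where the served repository file lives."""
            new_password = getpass.getpass("New password for Toby on the server: ")
            run(["fossil", "user", "password", "Toby", new_password, "-R", SERVER_REPO])

        def push_from_local_clone():
            """Run this from the local checkout; because the user is named in the URL,
            fossil prompts for the new password and offers to remember it."""
            run(["fossil", "push", REMOTE_URL])

        if __name__ == "__main__":
            # In practice these two steps run on different machines; they are shown
            # together only to illustrate the order of operations.
            set_server_password()
            push_from_local_clone()
    If there is no shell access on the host, the same change can typically be made through the Users page under Admin in the web UI that fossil.pl already serves; the key point is that the local clone and the served copy must end up agreeing on the password used for sync.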

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement, and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."
    Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.
    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.
    Supply Chain Management Systems in a Virtual Enterprise
    Historically, enterprise software was constructed to automate the ERP, and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise, most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.
    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
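    To make the architectural point concrete, here is a minimal sketch of what an ERP-agnostic orchestration layer can look like: the orchestrator works against a small adapter interface rather than any one ERP, so fulfillment steps can be routed to whichever enterprise owns them. All class and method names are hypothetical illustrations of the pattern described above, not Oracle Fusion DOO APIs.
        # Illustrative sketch (hypothetical names): an orchestration layer that sits above
        # multiple partner ERPs and routes each fulfillment step to the system that owns it.
        from abc import ABC, abstractmethod
        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class OrderLine:
            item: str
            quantity: int
            fulfiller: str  # name of the partner/system responsible for this line

        class ErpAdapter(ABC):
            """Thin, ERP-agnostic contract; each partner system gets its own implementation."""
            @abstractmethod
            def reserve(self, line: OrderLine) -> str: ...
            @abstractmethod
            def ship(self, line: OrderLine) -> str: ...

        class ContractManufacturerErp(ErpAdapter):
            def reserve(self, line: OrderLine) -> str:
                return f"CM reserved {line.quantity} x {line.item}"
            def ship(self, line: OrderLine) -> str:
                return f"CM shipped {line.quantity} x {line.item}"

        class LogisticsPartnerErp(ErpAdapter):
            def reserve(self, line: OrderLine) -> str:
                return f"3PL allocated transport for {line.item}"
            def ship(self, line: OrderLine) -> str:
                return f"3PL dispatched {line.item}"

        class Orchestrator:
            """Holds the cross-enterprise process; knows nothing about any ERP's internals."""
            def __init__(self, adapters: Dict[str, ErpAdapter]) -> None:
                self.adapters = adapters

            def fulfill(self, lines: List[OrderLine]) -> List[str]:
                events = []
                for line in lines:
                    erp = self.adapters[line.fulfiller]  # route by ownership, not by system type
                    events.append(erp.reserve(line))
                    events.append(erp.ship(line))
                return events

        if __name__ == "__main__":
            orchestrator = Orchestrator({
                "manufacturing": ContractManufacturerErp(),
                "logistics": LogisticsPartnerErp(),
            })
            order = [OrderLine("server rack", 2, "manufacturing"),
                     OrderLine("server rack", 2, "logistics")]
            for event in orchestrator.fulfill(order):
                print(event)
    The design choice worth noticing is that the orchestrator owns the cross-enterprise process while each adapter hides one partner's ERP, which is the "sitting above the ERPs" arrangement described above.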
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >