Search Results

Search found 28875 results on 1155 pages for 'oracle event'.


  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon? The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future. Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America. Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.” Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.). Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners. 
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC they left the storage rule named "default" unchanged, created a new storage rule called DispByContentId, and made that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough it was no longer default, but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • MySQL for Beginners course - first steps to lowering your Database TCOs

    - by Antoinette O'Sullivan
    Thinking about lowering your Database TCO by using the MySQL Server? Don't miss the chance to get training from the source! With the newly released MySQL for Beginners class, learn how this powerful relational database management system can make your life easier and more fun! This course covers all the basics and will get you on your way with a solid foundation. This instructor-led, hands-on class covers the fundamentals of SQL and relational databases, using MySQL as a teaching tool. Send information about this course release to a friend who might be considering getting started on the world's most popular small-footprint database.

    Read the article

  • Improving the performance of JDeveloper11g (part 2) and JVMs in general

    - by asantaga
    Just received an email from one of our JVM developers who read my blog entry on Performance tuning JDeveloper11g, and he's confirmed that all of the above parameters are totally supported :-) He's also provided a description of the parameters so we can learn what magic is actually being applied. - -XX:+AggressiveOpts -- this enables the latest and greatest JVM optimizations. It will likely help most Java applications. It's fully supported. The downside of it is that because it has the latest and greatest optimizations, there is some small probability that it may not offer as good an experience. As the features enabled with this command line option have "matured", they are made the default in a future JDK release. So, you can think of this command line option as the place where the newest optimizations get introduced. Some time later they are moved out from under AggressiveOpts to become default behavior. -XX:+OptimizeStringConcat -- only works with the -server JVM. It may be enabled by default in a future JDK 7 update release. This option delays the construction of a StringBuilder/StringBuffer and attempts to avoid re-sizing the underlying char[] by attempting to detect the size of the char[] to allocate based on what's being appended to the StringBuilder/StringBuffer. -XX:+UseStringCache -- I would not suggest using this unless you knew that JDeveloper allocated the same string over and over again. And, the string that's allocated over and over again is one of the first 100,000 allocated strings. In short, I'd recommend against using it. And, in fact, Java 7 currently does not include this feature. -XX:+UseCompressedOops -- applicable to 64-bit JVMs. And, if you're using a 64-bit JVM, I'd suggest you use it. It's auto-enabled in JDK 7 64-bit JVMs, and later JDK 6 64-bit JVMs enable it by default too. -XX:+UseGCOverheadLimit -- by default this option is already enabled. One other command line option to consider is -XX:+TieredCompilation for JDK 6 Update 25 or later, or JDK 7. This gives you the startup of a -client JVM and the peak performance of a -server JVM. Awesome-ness!  Finally, Charlie also pointed out to me a "new" book he's just published where he goes into the details of JVM tuning, a must for all Fusion Middleware tuning exercises (click the book). Thanks Charlie!
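    If you want to double-check which of these flags a JVM was actually launched with, a quick generic sanity check is to echo back the JVM's input arguments. This is a minimal sketch (the class name and usage are illustrative, not from the original post); run it with the same options you plan to give JDeveloper, and note that it only reports what was passed to the JVM, since an unsupported -XX option will typically prevent the JVM from starting at all. For a JVM that is already running, the JDK's jinfo tool can report similar information.

        import java.lang.management.ManagementFactory;
        import java.util.List;

        // Minimal sketch: print the -XX options the current JVM was started with,
        // e.g. to verify that -XX:+AggressiveOpts or -XX:+OptimizeStringConcat were applied.
        public class ShowJvmFlags {
            public static void main(String[] args) {
                List<String> jvmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
                for (String arg : jvmArgs) {
                    if (arg.startsWith("-XX:")) {
                        System.out.println(arg);
                    }
                }
            }
        }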

    Read the article

  • How To: Using SimpleMembershipProvider with MySQL Connector/Net

    - by Francisco Tirado
    With Connector/Net 6.9, users have the ability to use the SimpleMembership provider in MVC 4 templates. The configuration is very simple and is also compatible with OAuth; in this post we'll explain step by step how to configure it in an MVC 4 Web Application. Requirements: the requirements to use SimpleMembership with Connector/Net are: Connector/Net 6.9 installed, or the No Install version downloaded; .NET Framework 4.0 or greater; MVC 4; Visual Studio 2012 or a newer version. Creating and configuring a new project: In this example we'll use VS 2012 to create the project based on the Internet Application template, using Entity Framework to manage the User model. Open VS 2012 and create a new project: we'll create a new MVC 4 Web Application and configure the project to use .NET Framework 4.5. Type a name for the project and then click “Ok”. In the next dialog we'll choose the “Internet Application” template and use Razor as the view engine, without creating a test project. Click “Ok” to continue. Now we have a new project with the templates necessary to run a Web Application with the default values, and we'll keep working with these files. If you have Connector/Net installed you can skip this step; if you don't have it installed but are planning to do so, please install it and continue with the next step. If you're using the No Install version of Connector/Net, we need to add the references to our project; the assemblies needed are: MySql.Data, MySql.Data.Entities and MySql.Web. Be sure that the assemblies chosen match the .NET Framework version used in the project and that MySql.Data.Entities is compatible with EF5 (EF5 is the default added by the project). Now open the “web.config” file and, under the <connectionStrings> node, add a connection string that points to a MySQL instance. We'll use the following connection configuration: <add name="MyConnection" connectionString="server=localhost;UserId=root;password=pass;database=MySqlSimpleMembership;" providerName="MySql.Data.MySqlClient"/> Under the <system.web> node we'll add the following configuration: <membership defaultProvider="MySqlSimpleMembershipProvider"><providers><clear/><add name="MySqlSimpleMembershipProvider" type="MySql.Web.Security.MySqlSimpleMembershipProvider,MySql.Web,Version=6.9.3.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" applicationName="MySqlSimpleMembershipTest" description="MySQLdefaultapplication" connectionStringName="MyConnection"  userTableName="UserProfile" userIdColumn="UserId" userNameColumn="UserName" autoGenerateTables="True"/></providers></membership> In this configuration the mandatory properties are: connectionStringName, userTableName, userIdColumn, userNameColumn and autoGenerateTables. If the other properties are not provided, default values are used; but if the mandatory properties are not set, a ProviderException will be thrown. The valid properties for MySqlSimpleMembership are the same ones used for MySqlMembership, plus the mandatory fields. userTableName: name of the table where users will be stored; this table is independent of the schema generated by the provider and can be edited later by the user. userIdColumn: name of the column that will store the id for the records in the userTableName. userNameColumn: name of the column that will store the user name for the records in the userTableName. The connectionStringName property must match a connection string defined in the web.config file. 
    Once the configuration is done in web.config, we need to make sure that our database context for the Users table points to the right connection string. In our case we just need to update the UsersContext class in the file AccountModel.cs in the Models folder. The file also contains the UserProfile class, which matches the configuration for our user table. Another class that needs to be updated is SimpleMembershipInitializer, in the file InitializeSimpleMembershipAttribute.cs in the Filters folder. In that class we'll see a call to the method “WebSecurity.InitializeDatabaseConnection”; that call is where we need to update the parameters to match our configuration. If the database that you configured in your connection string doesn't exist, you need to create it empty. Now we're ready to run our web application: press F5 or the Run button in the toolbar. You'll see the following screen: If you go to the database used by the application you'll see some tables created; we are now using SimpleMembership. Now create a user: click on “Register” at the top right of the web page. Type your user name and password, then click on “Register”. You'll be redirected to the home page and you'll see the name of your user at the top right of the page. If you take a look at the tables just created in your database you will find the data about the user you just registered. In our case the tables that contain the information are UserProfile and Webpages_Membership.  Configuring OAuth: Another option for accessing your website is OAuth, so you can validate a user using an external account like Facebook, Twitter, Google, etc. In this post we'll enable authentication with a Google account in our application. Go to the class AuthConfig.cs in the App_Start folder. In the method “RegisterAuth”, uncomment the last line, which contains the call to the method “OAuthWebSecurity.RegisterGoogleClient”. Run the application. Once the application is running, click on “Login”. You will see at the right side the option to log in using a Google account; click on “Google”.  You will be asked for Google credentials. If your login is successful you'll see a message asking for your approval to give your site permission to access your information. Click on “Accept”. Now a page to register your user will be shown; click on “Register”. Your new user is now logged in to your application. You can take a look at the user information created in the tables UserProfile and Webpages_OauthMembership. If you want to use another external option to authenticate users, you must enable the client in the same class where we enabled Google authentication, but for other providers it is mandatory to register your application on their site. Once you have registered your application they will give you a token/key and the id for your application, and you will use that information to register the client. Thanks for reading.

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this “touch point” can be found in a variety of other processes including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).
    • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    • Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    • The broader topic of Stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.
    Check it out and let me know your thoughts!

    Read the article

  • ADF How-To #4: Adding a View Criteria and a Search Panel

    - by Vik Kumar
    In this week's How-To we explain how to add a view criteria to a VO and then use it to create a Search Panel via customization. The detailed steps can be found here. We have also prepared a video walking you through the steps, available via our YouTube channel. For any questions or comments, please use the comments section below or visit our OTN forum. We are always looking for topic suggestions for additional How-Tos.

    Read the article

  • Thematic map contd.

    - by jsharma
    The previous post (creating a thematic map) described the use of an advanced style (color ranged-bucket style). The bucket style definition object has an attribute ('classification') which specifies the data classification scheme to use. Its values can be one of {'equal', 'quantile', 'logarithmic', 'custom'}. We used logarithmic in the previous example. Here we'll describe how to use a custom algorithm for classification, specifically the Jenks Natural Breaks algorithm. We'll use the JavaScript implementation in geostats.js. The sample code from the previous post needs a few changes, which are listed below. Include the geostats.js file after or before including oraclemapsv2.js: <script src="geostats.js"></script> Modify the bucket style definition to use custom classification:    bucketStyleDef = {       numClasses : colorSeries[colorName].classes,       classification: 'custom', //'logarithmic',  // use a logarithmic scale       algorithm: jenksFromGeostats,       styles: theStyles,       gradient:  useGradient? 'linear' : 'off'     }; The function which implements the custom classification scheme is specified as the algorithm attribute value. It must accept two input parameters, an array of OM.feature and the name of the feature attribute (e.g. TOTPOP) to use in the classification, and must return an array of buckets (i.e. an array of OM.style.Bucket or, in this case, OM.style.RangedBucket). However the algorithm also needs to know the number of classes (i.e. the number of buckets to create), so we use a global to pass that info in. (Note: This bug/oversight will be fixed and the custom algorithm will be passed 3 parameters: the features array, attribute name, and number of classes.) So createBucketColorStyle() has the following changes: var numClasses ; function createBucketColorStyle( colorName, colorSeries, rangeName, useGradient) {    var theBucketStyle;    var bucketStyleDef;    var theStyles = [];    //var numClasses ; numClasses = colorSeries[colorName].classes; ... 
    and the function jenksFromGeostats is defined as: function jenksFromGeostats(featureArray, columnName) {    var items = [] ; // array of attribute values to be classified    $.each(featureArray, function(i, feature) {         items.push(parseFloat(feature.getAttributeValue(columnName)));    });    // create the geostats object    var theSeries = new geostats(items);    // call getJenks which returns an array of bounds    var theClasses = theSeries.getJenks(numClasses);    if(theClasses)    {     theClasses[theClasses.length-1]=parseFloat(theClasses[theClasses.length-1])+1;    }    else    {     alert(' empty result from getJenks');    }    var theBuckets = [], aBucket=null ;    for(var k=0; k<numClasses; k++)    {             aBucket = new OM.style.RangedBucket(             {low:parseFloat(theClasses[k]),               high:parseFloat(theClasses[k+1])             });             theBuckets.push(aBucket);     }     return theBuckets; } A screenshot of the resulting map with 5 classes is shown below. It is also possible to simply create the buckets and supply them when defining the Bucket style instead of specifying the function (algorithm). In that case the bucket style definition object would be    bucketStyleDef = {      numClasses : colorSeries[colorName].classes,      classification: 'custom',        buckets: theBuckets, //since we are supplying all the buckets      styles: theStyles,      gradient:  useGradient? 'linear' : 'off'    };

    Read the article

  • New Book: "Systems Performance: Enterprise and the Cloud"

    - by uwes
    Brendan Gregg, former Solaris kernel engineer at Sun, published his new book "Systems Performance: Enterprise and the Cloud" in October. The book is a modern, very comprehensive guide to general system performance principles and practices, as well as a highly detailed reference for specific UNIX and Linux observability tools used to examine and diagnose operating system behaviour. Read a more detailed abstract and review in Harry J Foxwell's blog entry on Brendan Gregg's "Systems Performance: Enterprise and the Cloud".

    Read the article

  • Comb Over

    - by Tim Dexter
    Being somewhat follicly challenged, and to my wife's utter relief, the comb over is not something I have ever considered. The title is a tenuous reference to a formatting feature that Adobe offers in their PDF documents. The comb provides the ability to equally space a string of characters on a pre-defined form layout so that it fits neatly in the area. See how the numbers above are being spaced correctly. It's not a function of the font but a property of the form field. For the first time in a long time I had the chance to build a PDF template today to help out a colleague. I spotted the property and thought, hey, let's give it a whirl and see if Publisher supports it. Lo and behold, Publisher handles the comb spacing in its PDF outputs. Exciting eh? OK, maybe not that exciting, but I was very pleasantly surprised to see it working. I am reliably informed by Leslie, BIP Evangelist and Tech Writer, that this feature was introduced from version 10.1.3.4.2 onwards. Official docs, with no mention of comb overs, are here. Happy Combing!

    Read the article

  • Problem with deleting table rows using ctrl+a for row selection

    - by Frank Nimphius
    The following code is commonly shown and documented for how to access the row key of selected table rows in an ADF Faces table configured for multi row selection. public void onRemoveSelectedTableRows(ActionEvent actionEvent) {    RichTable richTable = … get access to your table instance …    CollectionModel cm =(CollectionModel)richTable.getValue();    RowKeySet rowKeySet = (RowKeySet)richTable.getSelectedRowKeys();             for (Object key : rowKeySet) {       richTable.setRowKey(key);       JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding)cm.getRowData();       // do something with rowData e.g. update, print, copy   }    //optional, if you changed data, refresh the table         AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance(); adfFacesContext.addPartialTarget(richTable); } The code shown above works for 99.5 % of all use cases that deal with multi row selection enabled ADF Faces tables, except for when users use the ctrl+a key to mark all rows for delete. Just to make sure I am clear: if you use ctrl+a to mark rows to perform any other operation on them – like bulk updating all rows for a specific attribute – then this works with the code shown above. Even for bulk row delete, any other means of row selection (shift+click and multiple ctrl+click) works like a charm and the rows are deleted. So apparently it is the use of ctrl+a that causes the problem when deleting multiple rows of an ADF Faces table. To implement code that works for all table selection use cases, including the one to delete all table rows in one go, you use the code shown below. public void onRemoveSelectedTableRows(ActionEvent actionEvent) {   RichTable richTable = … get access to your table instance …   CollectionModel cm = (CollectionModel)richTable.getValue();   RowKeySet rowKeySet = (RowKeySet)richTable.getSelectedRowKeys();   Object[] rowKeySetArray = rowKeySet.toArray();      for (Object key : rowKeySetArray){               richTable.setRowKey(key);     JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding)cm.getRowData();                              rowData.getRow().remove();   }   AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();          adfFacesContext.addPartialTarget(richTable); }

    Read the article

  • SOA Suite 11g Developers Cookbook Published

    - by Antony Reynolds
    SOA Suite 11g Developers Cookbook Available. Just realized that I failed to mention that Matt's and my most recent book, the SOA Suite 11g Developers Cookbook, was published over Christmas last year! In some ways this was an easier book to write than the Developers Guide; the hard bit was deciding what recipes to include. Once we had decided that, the writing of the book was pretty straightforward. The book focuses on areas that we felt we had neglected in the Developers Guide, and so there is more about Java integration and OSB, both of which we see a lot of questions about when working with customers. Amazon has a couple of reviews. Table of Contents:
    Chapter 1: Building an SOA Suite Cluster
    Chapter 2: Using the Metadata Service to Share XML Artifacts
    Chapter 3: Working with Transactions
    Chapter 4: Mapping Data
    Chapter 5: Composite Messaging Patterns
    Chapter 6: OSB Messaging Patterns
    Chapter 7: Integrating OSB with JSON
    Chapter 8: Compressed File Adapter Patterns
    Chapter 9: Integrating Java with SOA Suite
    Chapter 10: Securing Composites and Calling Secure Web Services
    Chapter 11: Configuring the Identity Service
    Chapter 12: Configuring OSB to Use Foreign JMS Queues
    Chapter 13: Monitoring and Management

    Read the article

  • High Availability documents for OBIA

    - by Lia Nowodworska - Oracle
    There are two white papers that have been created by the Product Management and Advanced Resolution teams (thanks Rajesh, Archana). These documents describe how to deploy a high-availability environment for the WebLogic components of BI Applications 11.1.1.8.1 and 11.1.1.7.1, including Oracle Data Integrator. New: Configuring High Availability for Oracle Business Intelligence Applications Version 11.1.1.8.1 (Doc ID 1679319.1). Updated: Configuring High Availability for Oracle Business Intelligence Applications Version 11.1.1.7.1 (Doc ID 1587873.1). When implementing OBIA please take some time to review one or both of the papers. Still have a quick question that you want an expert to have a look at? Check in the OBIA Community.

    Read the article

  • Translatability Guidelines for Usability Professionals

    - by ultan o'broin
    There is clearly a demand for translatability guidelines aimed at usability professionals working in the enterprise applications space, judging by Google Analytics and the interest generated in the Twitterverse by my previous post on the subject. So let's continue the conversation. I'll flesh out each of the original points a bit more in posts over the coming weeks. Bear in mind that large-scale enterprise translation is a process. It needs to be scalable, repeatable, maintainable, and above all meet the requirements of automation. That doesn't mean the user experience needs to suffer, however. So, stay tuned for some translatability best practices for usability professionals....

    Read the article

  • Open Source Software Development Center at University of Belgrade

    - by Tori Wieldt
    A new Open Source Software Development Center is open at University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code."  Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP Plugin; WorkieTalkie (NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community are welcome to come to learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine if the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files will give the output results of the ETL run. The staretl.html file will give a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process. The log file can be used as a validation for process completion. To automate the monitoring of these log files, perform the following steps: 1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert. 2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be a good cause for an alert. 3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could be different based on Reporting Database versions): [INFO] (Message) Finished Writing Report 4. You could write an Ant script to execute the ETL process, have it set to failonerror="true", and from there send results to an external tool to monitor the jobs, send to email, or send to database. Each ETL run appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA. Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star databases; as of Reporting Database 2.2 and higher this would be in the Star database. The etl_processmaster table records entries for the ETL run along with a Start and Finish time. If the ETL process has failed, the Finish date should be null. This table can be queried at a time when the ETL process is expected to be finished, and if the Finish date is null, send an alert. These are just some options. There are additional ways this can be accomplished based around these two areas - log files or database. 
    Here is an additional query to gather more information about your ETL run (connect as Staruser):
    SELECT SYSDATE,
           test_script,
           decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc+1))) PROCESSNAME,
           duration duration
    FROM (
      SELECT (e.endtime - b.starttime) * 1440 duration,
             to_char(b.starttime, 'hh24:mi:ss') starttime,
             to_char(e.endtime, 'hh24:mi:ss') endtime,
             b.PROCESSNAME,
             instr(b.PROCESSNAME, ']') loc,
             b.infotype test_script
      FROM (
        SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
        FROM etl_processinfo
        WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
          AND infotype = 'BEGIN'
      ) b
      INNER JOIN (
        SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
        FROM etl_processinfo
        WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
          AND infotype = 'END'
      ) e ON b.processid = e.processid
         AND b.PROCESSNAME = e.PROCESSNAME
      ORDER BY b.starttime
    )
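    As a rough illustration of the log-file checks in steps 1-3 above, here is a minimal Java sketch that scans a staretl log for an [ERROR] entry, the final step marker, and the closing "Finished Writing Report" line. The class name, the file path argument, and the hard-coded "Step 39/39" marker are illustrative assumptions; adjust them to your installation, version, and alerting mechanism.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        // Hypothetical monitor sketch: pass the path to staretlprocess.log as the first argument.
        public class EtlLogCheck {
            public static void main(String[] args) throws IOException {
                List<String> lines = Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8);
                boolean hasError = false;
                boolean reachedFinalStep = false;
                for (String line : lines) {
                    if (line.contains("[ERROR]")) {
                        hasError = true;           // a major error is worth an alert (step 1)
                    }
                    if (line.contains("Step 39/39")) {
                        reachedFinalStep = true;   // final step marker; count may differ by version (step 2)
                    }
                }
                String lastLine = lines.isEmpty() ? "" : lines.get(lines.size() - 1);
                boolean finished = lastLine.contains("Finished Writing Report"); // step 3
                if (hasError || !reachedFinalStep || !finished) {
                    System.out.println("ALERT: ETL run needs attention"); // hook in email/monitoring here
                } else {
                    System.out.println("ETL run looks complete");
                }
            }
        }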

    Read the article

  • Recordings Available - Features and Functions Accounting Module

    - by MHundal
    Recordings are available to provide a high-level overview of the ETPM Accounting Module.  The Accounting Module includes Financial Transactions, Adjustments, P&I, Waivers, Overpayments, General Ledger Details, etc... The following three recordings contain a presentation with the primary concepts to be covered and then a walk-thru of the application to look at the concepts being described. ETPM Functions & Features: Accounting Overview:  https://oracletalk.webex.com/oracletalk/ldr.php?AT=pb&SP=MC&rID=67367292&rKey=443823012d0fc43e ETPM Functions & Features: Accounting - P&I, Waivers:  https://oracletalk.webex.com/oracletalk/ldr.php?AT=pb&SP=MC&rID=67432777&rKey=64eb220a56d8e32f  ETPM Functions & Features - Accounting - Rates:  https://oracletalk.webex.com/oracletalk/ldr.php?AT=pb&SP=MC&rID=67370637&rKey=63ca9024ce3b4398

    Read the article

  • OWB – OWBLand on SourceForge

    - by David Allan
    There are a bunch of interesting utilities that are either experts or OMB scripts, hosted on SourceForge by some keen OWB users (see the home here). One of the main initiatives has been an Excel to OWB ‘one click ETL’ utility, which looks to have had a fair amount of code added; there is an example, but it's kinda light on documentation, though it does look like it covers quite a lot. One of the nice things about SourceForge is that you can peek into the statistics and see what kind of activity has gone on; from last August there have been a bunch of downloads, with a big peak last November… Another utility that is there is one to generate OMB from a mapping definition, a bunch of useful stuff there - http://sourceforge.net/projects/owbland/files/

    Read the article

  • Oracle Database 11g in Action: Real User Experiences

    - by Lajos Sárecz
    Last year Mike Dietrich held an Upgrade Workshop in Hungary as well; in the video below he shares a few interesting customer stories about why it is worth moving to Oracle Database 11g and what benefits customers who have already been through the upgrade process have gained. If the examples from abroad are not convincing enough, in three weeks Hungarian customers' 11g upgrade stories can also be heard in the "Korszerű adatközpontok" ("Modern Data Centers") session of the HOUG Konferencia.

    Read the article

  • Mobile Deals: the Consumer Wants You in Their Pocket

    - by Mike Stiles
    Mobile deals offer something we talk about a lot in social marketing: relevant content. If a consumer is already predisposed to liking your product and gets a timely deal for it that’s easy and convenient to use, not only do you score on the marketing side, it clearly generates some of that precious ROI that’s being demanded of social. First, a quick gut-check on the public’s adoption of mobile. Nielsen figures have 55.5% of US mobile owners using smartphones. If young people are indeed the future, you can count on the move to mobile exploding exponentially. Teens are the fastest growing segment of smartphone users, and 58% of them have one. But the largest demographic of smartphone users is 25-34 at 74%. That tells you a focus on mobile will yield great results now, and even better results straight ahead. So we can tell both from statistics and from all the faces around you that are buried in their smartphones this is where consumers are. But are they looking at you? Do you have a valid reason why they should? Everybody likes a good deal. BIA/Kelsey says US consumers will spend $3.6 billion this year for daily deals (the Groupons and LivingSocials of the world), up 87% from 2011. The report goes on to say over 26% of small businesses are either "very likely" or "extremely likely" to offer up a deal in the next 6 months. Retail Gazette reports 58% of consumers shop with coupons, a 40% increase in 4 years. When you consider that a deal can be the impetus for a real-world transaction, a first-time visit to a store, an online purchase, entry into a loyalty program, a social referral, a new fan or follower, etc., that 26% figure shows us there’s a lot of opportunity being left on the table by brands. The existing and emerging technologies behind mobile devices make the benefits of offering deals listed above possible. Take how mobile payment systems are being tied into deal delivery and loyalty programs. If it’s really easy to use a coupon or deal, it’ll get used. If it’s complicated, it’ll be passed over as “not worth it.” When you can pay with your mobile via technologies that connect store and user, you get the deal, you get the loyalty credit, you pay, and your receipt is uploaded, all in one easy swipe. Nothing to keep track of, nothing to lose or forget about. And the store “knows” you, so future offers will be based on your tastes. Consider the endgame. A customer who’s a fan of your belt buckle store’s Facebook Page is in one of your physical retail locations. They pull up your app, because they’ve gotten used to a loyalty deal being offered when they go to your store. Voila. A 10% discount active for the next 30 minutes. Maybe the app also surfaces social references to your brand made by friends so they can check out a buckle someone’s raving about. If they aren’t a fan of your Page or don’t have your app, perhaps they’ve opted into location-based deal services so you can still get them that 10% deal while they’re in the store. Or maybe they’ve walked in with a pre-purchased Groupon or Living Social voucher. They pay with one swipe, and you’ve learned about their buying preferences, credited their loyalty account and can encourage them to share a pic of their new buckle on social. Happy customer. Happy belt buckle company. All because the brand was willing to use the tech that’s available to meet consumers where they are, incentivize them, and show them how much they’re valued through rewards.

    Read the article

  • Social Search: Looking for Love

    - by Mike Stiles
    For marketers and enterprise executives who have placed a higher priority on and allocated bigger budgets to search over social, it might be time to notice yet another shift that’s well underway. Social is search. Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it’s a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself “AAAA Plumbing.” There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities. But what if the consumer isn’t using a search engine to find what they’re looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise’s numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search. You’ll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are including Facebook and Google+ into their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms. So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, “Anybody know any good catering companies?” will usually yield a link (and an endorsement) from a friend such as “Yeah, check out Just-Cheese-Balls Catering.” There’s no such human-driven force/influence behind the big search engines. Facebook’s Mark Zuckerberg and others call it “Friend Mining.” It is, in essence, searching for answers from friends’ experiences as opposed to faceless code. And Facebook has all of those friends’ experiences already stored as data. eMarketer says search is an $18 billion business, and investors are really into it. So no shock Facebook’s ready to leverage their social graph into relevant search. What do you do about all this as a brand? For one thing, it’s going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services…and you’ll wind up with the visibility you deserve in social search results.

    Read the article

  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI’s Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM’s use-case driven approach has come up in a few recent conversations. So I thought I would jot down a few thoughts on how the use-case driven approach aids a project team in managing the project’s scope. The use-case model is one of several tools in OUM that is used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project’s scope. Here are some of the primary benefits of using the use-case model as part of the effort for establishing and managing project scope: The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for the decisions regarding architecture and design and how they relate to the project's objectives. Once agreed upon, the model can be put under change control and any updates to the model can then be quickly identified as potentially affecting the project’s scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule. A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use-cases can be tackled early in the project. The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful in not only managing scope, but in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities. The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication and you have an effective tool for reducing the risk of overruns in budget and/or time due to out-of-control scope changes. Now that you’ve had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.

    Read the article

  • Summer Upgrade Workshops are Open!

    - by roy.swonger
    The listing of upcoming events is located in the right sidebar of the main blog page, down below the flag counter. If you haven't checked out our schedule lately, you might be surprised at how active we will be with travel this summer. Coming up next week will be upgrade workshops in the USA (St. Louis and Minneapolis) followed by a pair in Canada (Toronto and Montreal) and then two in Europe (Brussels and Utrecht). Make your plans now to attend an upgrade workshop in your area. As you can see from the long list of planned events, it is very likely that Mike or I will be coming to your area sometime soon!

    Read the article
