Search Results

Search found 31628 results on 1266 pages for 'legacy database'.

Page 489/1266 | < Previous Page | 485 486 487 488 489 490 491 492 493 494 495 496  | Next Page >

  • What's Your Supply Chain+Manufacturing Strategy for Success

    - by [email protected]
    Forward-thinking enterprises look to eliminate their dependence on legacy applications that manage information in batch, replacing them with real-time, integrated, modern information management. With rapid manufacturing and global supply chains far more complex today, and the pace of change ever increasing, leading organizations need better ways to orchestrate supply chain synchronization with their partner and customer base. The Mar/Apr '10 edition of EM magazine covers this topic in the article "Strategising for Success" (pgs 26-27) and discusses the options available to organizations as they drive improvements in the levels of collaboration with their partners, suppliers, shippers, distributors and ultimately their end-users, the customers! I'll paste the link to the article here as soon as I validate/confirm it.

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years' worth of notes on proper Teradata deployment across multiple operating systems. The topic is too large to cover succinctly in a blog entry. I'm trying something new: document a specific situation, consolidate the facts, document diagnostic procedures, and then clone the structure to cover other obiee deployments (11g and other operating systems). Until the icon below is removed, this blog entry may be revised frequently. No construction between June 6th and June 25th.
    Getting started
    obiee 10g certification: pg 24-25, Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1
    obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guide
    obiee overview:
    teradata connectivity downloads: (requires registration)
    solaris odbc drivers:
      sparc 13.10: Choose 13.10.00.04 (ReadMe)
      sparc 14.00: probably would work, but not certified by Oracle on 10g
    I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice.
    Teradata odbc install requires root for Solaris pkgadd.
    Only 1 version of Teradata odbc can be installed; symbolic links to the current version are created in /usr/lib at install.
    obiee implementation background
    Database access has two types of implementation: native and odbc. Native drivers use DB vendor client interfaces for access; odbc drivers are provided by the DB vendor for DB access. Teradata is an odbc interface database. odbc drivers require an ODBC Driver Manager; obiee uses the Merant DataDirect driver manager. obiee servers communicate with one another using odbc. The internal odbc driver is implemented by the obiee team and requires the Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but it should not be used in obiee. The nqsserver shared library deployment looks like this:
      OBIEE Server <-> DataDirect Manager <-> Teradata Driver <-> Teradata Database
    nqsserver startup
      $ cd $BI/setup
      $ . ./sa-init64.sh
      $ run-sa.sh autorestart64
    The following files are referenced from setup: .variant.sh, user.sh, NQSConfig.INI, DBFeatures.INI, $ODBCINI (odbc.ini), sqlnet.ora
    How does nqsserver connect to Teradata?
    A Teradata DSN is created in the RPD. (TD71)
    setup/odbc.ini contains:
      [ODBC Data Sources]
      TD71=tdata.so
      [TD71]
      Driver=/opt/tdodbc/odbc/drivers/tdata.so
      Description=Teradata V7.1.0
      DBCName=###.##.##.###
      LastUser=
      Username=northwind
      Password=northwind
      Database=
      DefaultDatabase=northwind
    setup/user.sh contains:
      LIBPATH=/opt/tdicu/lib_64:/usr/odbc/lib:/usr/odbc/drivers:/usr/lpp/tdodbc/odbc/drivers:$LIBPATH
      export LIBPATH
    setup/.variant.sh contains:
      if [ "$ANA_SERVER_64" = "1" ]; then
        ANA_BIN_DIR=${SAROOTDIR}/server/Bin64
        ANA_WEB_DIR=${SAROOTDIR}/web/bin64
        ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64
    setup/sa-run.sh contains:
      . ${ANA_INSTALL_DIR}/setup/.variant.sh
      . ${ANA_INSTALL_DIR}/setup/user.sh
      logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"
      ${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &
    nqsserver is running:
    nqsserver produces $BI/server/nqsserver.log. At startup, the native database drivers connect and record DB versions. tdata.so is not loaded until a Teradata DB connection is attempted.
    Teradata odbc client installation
    Accept all the defaults for pkgadd. Install in /opt.
      $ mkdir odbc
      $ cd odbc
      $ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf -
      $ sudo su
      # pkgadd -d . TeraGSS
      # pkgadd -d . tdicu1310
      # pkgadd -d . tdodbc1310
    Directory Notes:
    /opt/teradata/client/13.10/odbc_64/lib/tdata.so - the 64-bit obiee library loaded by nqsserver.
    /opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH.
    /opt/teradata/client/13.10/tdicu/lib64 is needed in LD_LIBRARY_PATH.
    /usr/odbc should not be referenced; it is a link to 32-bit libraries.
    LD_LIBRARY_PATH_64 should not be used.
    Useful bash functions and aliases
      export SAROOTDIR=/export/home/dw_adm/OracleBI
      export TERA_HOME=/opt/teradata/client/13.10
      export ORACLE_HOME=/export/home/oracle/product/10.2.0/client
      export ODBCINI=$SAROOTDIR/setup/odbc.ini
      export TD_ICU_DATA=$TERA_HOME/tdicu/lib64
      alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"
      alias cdtd="cd $TERA_HOME; ls"
      alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"
      alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"
      alias cdbi="cd $SAROOTDIR; ls"
      alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"
      alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"
      alias cdsvr="cd $SAROOTDIR/server; ls"
      alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"
      alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"
      alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"
      alias cdweb="cd $SAROOTDIR/web; ls"
      alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"
      alias cdoci="cd $ORACLE_HOME; ls"
      pkgfiles() { pkgchk -l $1 | awk '/^Pathname/ {print $2}'; }
      pkgfind()  { pkginfo | egrep -i $1 ; }
    Examples:
      $ pkgfind td
      $ pkgfiles tdodbc1310 | grep 64
      $ cds
      $ cdtdodbc
      $ cdsetup
      $ cdsvrlog
      $ cdweblog

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly every time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table.
    USE Warehouse
    CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))
    CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)
    CREATE PROC Integration.MergePolicy as
    begin
      begin tran
      Merge fact.Policy as tgt
      Using Integration.MergePolicy as Src
      On (tgt.PolicyId = Src.PolicyId)
      When not matched by Target then
        Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
        values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
      When matched and src.Operation = 'U' then
        Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
      When matched and src.Operation = 'D' then
        Delete ;
      delete from Integration.WorkPolicy
      commit
    end
    Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was beginning to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
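    A minimal T-SQL sketch of the remedy described above, using the work-table name from the post; the choice of PolicyId as the clustering key and the index name are illustrative assumptions, not the author's exact script:
      -- Empty the heap quickly; TRUNCATE deallocates its pages outright.
      TRUNCATE TABLE Integration.MergePolicy;
      -- Give the work table a clustered index so scans of an empty table touch a single page.
      CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId);
      -- Re-check the I/O cost the same way the post did.
      SET STATISTICS IO ON;
      SELECT COUNT(*) FROM Integration.MergePolicy;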

    Read the article

  • Hosting a web application on discountasp.net using sql ce 5

    - by David Stanley
    I am hoping that someone has experience with this, since the discountasp site is sorely lacking in straightforward answers. I am building a lightweight web application and have decided to use SQL CE as its database. Two questions regarding this: Do I need to get an actual database hosted as well as the site in order for it to work? Do you know if discountasp supports the use of SQL CE (not with WebMatrix or any CMS builds, completely custom)? If they don't, do you have any experience/recommendations with getting this done?

    Read the article

  • Winnipeg SQL Server UG April Event &ndash; How To Do An Index Review

    - by D'Arcy Lussier
    April Event - How to Do an Index Review
    April 14th, 2010, 5:30 - 8:00
    17th Floor Conference Room, Richardson Building, One Lombard Place, Winnipeg
    Pizza and Drinks Provided!
    Did you know that SQL Server 2005+ keeps query execution statistics, index usage statistics, and even missing index statistics? Learn how to access this information and use it to help you make good decisions about what your database really needs in terms of indexes, in a lot less time than you might think an index review should take. There are 6 or 7 (depending on your version of SQL Server) DMVs (dynamic management views) to look at which reveal a lot about your database and how you can improve its performance. To register for this event, please click HERE!
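    Not material from the session itself, but an illustrative example of the kind of DMV query it refers to, using the missing-index DMVs that ship with SQL Server 2005 and later (the ranking expression is a common convention, not an official formula):
      SELECT TOP (20)
             d.statement AS table_name,
             d.equality_columns,
             d.inequality_columns,
             d.included_columns,
             s.user_seeks,
             s.avg_user_impact
      FROM sys.dm_db_missing_index_details AS d
      JOIN sys.dm_db_missing_index_groups AS g
           ON g.index_handle = d.index_handle
      JOIN sys.dm_db_missing_index_group_stats AS s
           ON s.group_handle = g.index_group_handle
      ORDER BY s.user_seeks * s.avg_user_impact DESC;
    A similar query against sys.dm_db_index_usage_stats covers the usage side: which existing indexes are actually being sought, scanned, or only updated.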

    Read the article

  • Desktop Fun: Springtime Personas Themes for Firefox

    - by Asian Angel
    With weeks of winter weather left to go, it can be a bit depressing to look outside and see nothing but bland, lifeless scenery. To help bridge the time gap until you can open the windows, enjoy the warmth of the sun, and feel the spring breeze upon your face, we present our Springtime Personas Themes for Firefox collection.

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved a throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).
    Performance Landscape
    Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows
      System       Queries/hour   Users*   Avg Response (sec)   Median Response (sec)
      SPARC T4-4   430,000        7,300    0.85                 0.43
      * Users - the supported number of users with a given think time of 60 seconds
    Configuration Summary and Results
    Hardware Configuration:
      SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 1 TB memory
      Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
      Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)
    Software Configuration:
      Oracle Solaris 11 11/11
      Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option
    Benchmark Description
    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g., South America) for a particular time period (e.g., Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g., Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g., major household appliances) and then issue further queries drilling down into particular products (e.g., refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.
    Key Points and Best Practices
    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time.
    The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour), the formula shows that the T4-4 server will support about 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.
    See Also
      Quantitative System Performance: Computer System Analysis Using Queueing Network Models - Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
      Oracle Database 11g – Oracle OLAP (oracle.com, OTN)
      SPARC T4-4 Server (oracle.com, OTN)
      Oracle Solaris (oracle.com, OTN)
      Oracle Database 11g Release 2 (oracle.com, OTN)
    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
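    To make the two tuning and arithmetic points above concrete, here is a small Oracle SQL sketch; the SCOPE clause and the rounding are illustrative assumptions, not part of the published result:
      -- Best-practice parameter called out above (SCOPE=BOTH assumes an spfile is in use).
      ALTER SYSTEM SET cursor_sharing = FORCE SCOPE = BOTH;
      -- Response-time law N = (rt + tt) * tp with the published figures:
      -- rt = 0.85 s, tt = 60 s, tp = 430,000 queries/hour, or roughly 119.44 queries/sec.
      SELECT ROUND((0.85 + 60) * (430000 / 3600)) AS supported_users FROM dual;
      -- Returns 7268, reported as approximately 7,300 users in the table above.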

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com:
    Smart Grid Gateway 2.0.0 - a new product to integrate with Smart Grid networks
    Meter Data Management 2.0.1 - a new version of the Meter Data Management product
    These are the first products to use the brand new version of the Oracle Utilities Application Framework (V4.1). The new framework builds on FW2.2 and FW4.0.2 to add exciting new features (this is just a subset):
    Support for Database Vault
    Enhancements to Business Object Maintenance
    Batch Statistics Portal for benchmarking
    Custom template user exit support
    File permissions now consistent with other Oracle products
    Use of Universal Connection Pool for all database pool access
    Ability to manage the batch data cache
    Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • Announcement: Oracle Solaris 11.1

    - by uwes
    On October 3rd at Oracle OpenWorld, John Fowler announced Oracle Solaris 11.1. This first update to Oracle Solaris 11 increases uptime for the Oracle Database:
    8x faster database shutdown and start-up
    Helps DBAs find and resolve I/O issues, increasing performance
    1.2x Oracle RAC throughput
    Oracle Solaris 11.1 also drives up network utilization by extending network virtualization to include Edge Virtual Bridging and Data Center Bridging, which help manage network bandwidth for high-priority services and applications. Learn more and share these valuable tools with your customers to encourage them to deploy Oracle Solaris 11.1:
    Read the Press Release here
    Oracle Solaris 11.1 Data Sheet (PDF)
    What's New in Solaris 11.1
    Oracle Solaris 11.1 FAQs
    Join the online web event "Oracle Solaris 11 Innovations for your Data Center" on November 7, 2012

    Read the article

  • Certify August Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.
    Applications: Oracle Demantra Demand Management 7.3.1.5, Oracle Demantra Predictive Trade Planning 7.3.1.5, Oracle Demantra Sales and Operations Planning 7.3.1.5
    Database: Oracle Database Client 12.1.0.1.0, 11.2.0.4.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    E-Business Suite: Oracle E-Business Suite 12.1.3, Oracle E-Business Suite 12.1.2, Oracle E-Business Suite 12.1.1, Oracle E-Business Suite 12.0.6, Oracle E-Business Suite 11.5.10.2
    Edge Applications: Oracle Transportation Management 6.3.2
    Enterprise Manager: Oracle Application Management Pack for Oracle E-Business Suite 12.1.0.1.0
    Fusion Middleware: Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Forms Builder 11.1.1.6.0, Oracle Application Development Framework 11.1.1.6.0, Oracle Application Development Runtime 11.1.1.6.0, Oracle Business Intelligence Publisher 11.1.1.6.0, Oracle Directory Services Manager 11.1.1.6.0, Oracle Forms 11.1.1.6.0, Oracle GoldenGate 11.1.1.1.0, 11.1.1.1.2, 11.1.1.1.1, Oracle GoldenGate Application Adapters 11.1.1.1.1, Oracle Identity and Access Management 11.1.2.0.0, 11.1.2.1.0, Oracle Identity Federation 11.1.1.6.0, Oracle Real-Time Decision Load Generator 11.1.1.7.0, Oracle Real-Time Decision Studio 11.1.1.7.0, Oracle Real-Time Decisions 11.1.1.6.0, Oracle Reports 11.1.1.6.0, Oracle Segmentation Server 11.1.1.6.0, Oracle Virtual Directory 11.1.1.6.0, Oracle Web Cache 11.1.1.6.0, Oracle WebCenter Content Imaging 11.1.1.8.0, Oracle WebCenter Content Inbound Refinery Server 11.1.1.8.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Enterprise Capture 11.1.1.8.0, Oracle WebCenter Portal 11.1.1.8.0, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: Explorer 11.1.1.8.0, Oracle WebCenter Sites: Developer Tools 11.1.1.8.0, Oracle WebCenter Universal Content Management 11.1.1.8.0, Reports Builder 11.1.1.6.0
    FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0, 2.13.2.0.0, 2.13.1.0.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Tools 9.1.3.0, 9.1.2.0, 9.1.0.0
    JD Edwards World: JD Edwards World Service Enablement A93SE, A931SE
    PeopleSoft: PeopleSoft PeopleTools 8.52
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel CRM Desktop Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel HI Web Client 8.2.2.2.0, 8.1.1.9.0, Siebel Gateway Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Outlook Add-in Client 8.2.2.2.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Web Server Extension 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0

    Read the article

  • links for 2010-03-24

    - by Bob Rhubart
    @dhinchcliffe: When online communities go to work "As we see a growing set of examples of successful online communities in the enterprise space (both internally and externally), the broad outlines are emerging of what is turning into a vital new channel for innovation, business agility, customer relationships, and productive output for most organizations: Online communities as one of the most potent new ways to achieve business objectives, both in terms of cost and quality." -- Dion Hinchcliffe (tags: enterprisearchitecture entarch enterprise2.0 socialmedia) Steven Chan: WebCenter 11g (11.1.1.2) Certified with E-Business Suite Release 12 Steven Chan shares information on WebCenter 11g's (11.1.1.2) certification with Oracle E-Business Suite Release 12, along with a list of certified EBS 12 Platforms (tags: oracle otn enterprise2.0 webcenter ebs) @oraclenerd: 1Z0-052 - Exploring the Oracle Database Architecture Oracle ACE Chet "Oraclenerd" Justice shares a list of resources/documentation covering Oracle Database Architecture. (tags: oracle otn oracleace dba certification architecture) @oraclenerd: 1Z0-052 - Books "I don't believe I have ever purchased a book on or about Oracle. The documentation provided, especially for the database, is top notch. There is so much information available out there if you just know how to find it. Reading AskTom for years didn't hurt either." -- Chet "@oraclenerd" Justice. (tags: otn oracle oracleace certification dba) Lucas Jellema: Castle in the clouds – Building the Connexys SaaS application with Fusion Middleware Oracle ACE Director Lucas Jellema shares the slides from the presentation he and colleague Arne van der Ing submitted for OBUG 2010. (tags: otn oracle oracleace cloud saas obug fusionmiddleware connexys) John Burke: Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise "[ERP] has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes ERP as we've know it is dead, but long live ERP as a connected application member of the modern enterprise." -- John Burke, Group VP, Applications Business Unit, Oracle (tags: oracle otn entarch erp) Darwin-IT: Postfix for handling mail in your integration solution "It took me some time to understand Postfix. I was quite overwhelmed by the options. And it took me some time to figure out how to configure it for this particular usecase...But as with most other things..it turns out to be simple." -- Martien van den Akker (tags: oracle linux soa postfix) TheServerSide.com: Cameron Purdy at TSSJS 2010: If Java beats C++, what's next? ''It turns out that Java performance is much better on modern architecture. That is because of multicore processors and in-lining.'' -- Cameron Purdy, as quoted in an article by Jack Vaughn (tags: oracle java otn c++)

    Read the article

  • ArchBeat Link-o-Rama for 11/17/2011

    - by Bob Rhubart
    Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c | Richard Rotter Richard Rotter demonstrates "how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing." Article: Social + Lean = Agile | Dave Duggal In today’s increasingly dynamic business environment, organizations must continuously adapt to survive. Change management has become a major bottleneck. Organizations need a practical mechanism for managing controlled variance and change in-flight to break the logjam. This paper provides a foundation for applying lean and agile principles to achieve Enterprise Agility through social collaboration. Stress Testing Java EE 6 Applications - Free Article In Free Java Magazine : Adam Bien "It is strange," says Adam Bien, "everyone is obsessed about green bars and code coverage, but testing of multi threaded behavior is widely ignored - until the applications run into massive problems." Using Access Manager to Secure Applications Deployed on WebLogic | Rene van Wijk Another great how-to post from Oracle ACE Rene van Wijk, this time involving JBoss RichFaces, Facelets, Oracle Coherence, and Oracle WebLogic Server. DOAG 2011 vs. Devoxx - Value and Attraction | Markus Eisele Oracle ACE Director Markus Eisele compares and contrasts these popular conferences with the aim of helping others decide which to attend. SOA All the Time; Architects in AZ; Clearing Info Integration Hurdles This week on the Architect Home Page on OTN. Webcast: Oracle Business Intelligence Mobile Event Date: Wednesday, December 7, 2011 Time: 10 a.m. PT/1 p.m. ET Featuring Manan Goel (Director BI Product Marketing, Oracle) and Shailesh Shedge (Director BI and Analytics Practice, Ascentt). Webcast: Maximum Availability on Private Clouds A discussion of Oracle’s Maximum Availability Architecture, Oracle Database 11g, Oracle Exadata Database Machine, and Oracle Database appliance, featuring Margaret Hamburger (Director, Product Marketing, Oracle) and Joe Meeks (Director, Product Management, Oracle). November 30, 2011 at 10:00am PT / 1:00pm ET. Oracle Technology Network Architect Day - Phoenix, AZ Wednesday December 14, 2011, 8:30am - 5:00pm. The Ritz-Carlton, Phoenix, 2401 East Camelback Road, Phoenix, AZ 85016. Registration is free, but seating is limited.

    Read the article

  • Software center not opening

    - by kishore kumar
    $ software-center
    2012-09-07 18:45:04,349 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/dbus/proxies.py', 410, '_introspect_error_handler')'
    2012-09-07 18:45:04,349 - dbus.proxies - ERROR - Introspect error on :1.128:/com/ubuntu/Softwarecenter: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    2012-09-07 18:45:29,406 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
    2012-09-07 18:45:29,409 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
    2012-09-07 18:45:29,822 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
    2012-09-07 18:45:29,973 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
    2012-09-07 18:45:29,991 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
    Killed

    Read the article

  • Okular can't read pdf files

    - by hoang anh Nguyen
    I recently installed Okular on my Ubuntu 14.04. The problem is that when I open pdf files, Okular gives me the error "Can not find a plugin which is able to handle the document being passed." When I run Okular from the terminal, this is the message I get:
    okular(14100)/kdeui (KIconLoader): Error: standard icon theme "oxygen" not found!
    okular(14100)/kdeui (KIconLoader): Error: standard icon theme "oxygen" not found!
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::frameSize: No frame loaded
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::frameSize: No frame loaded
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::frameSize: No frame loaded
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::frameSize: No frame loaded
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::frameSize: No frame loaded
    okular(14100) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14100) KPixmapSequence::frameSize: No frame loaded
    okular(14100): No ksycoca4 database available!
    okular(14100)/kdecore (trader) KServiceTypeTrader::defaultOffers: KServiceTypeTrader: serviceType "okular/Generator" not found
    okular(14100)/kdecore (KConfigSkeleton) KCoreConfigSkeleton::writeConfig:
    okular(14100)/kdecore (KConfigSkeleton) KCoreConfigSkeleton::writeConfig:
    okular(14100)/kdecore (KConfigSkeleton) KCoreConfigSkeleton::writeConfig:
    okular(14100)/kdecore (KConfigSkeleton) KCoreConfigSkeleton::writeConfig:
    okular(14100)/kdecore (KConfigSkeleton) KCoreConfigSkeleton::writeConfig:
    okular(14100): No ksycoca4 database available!
    okular(14100)/kdecore (trader) mimeTypeSycocaServiceOffers: KMimeTypeTrader: mimeType "application/pdf" not found
    okular(14100): No ksycoca4 database available!
    okular(14100)/kdecore (trader): KMimeTypeTrader: couldn't find service type "okular/Generator" Please ensure that the .desktop file for it is installed; then run kbuildsycoca4.
    okular(14100)/okular (app) Okular::Document::openDocument: No plugin for mimetype '"application/pdf"'.
    okular(14100): Couldn't start knotify from knotify4.desktop: "KLauncher could not be reached via D-Bus. Error when calling start_service_by_desktop_path: The name org.kde.klauncher was not provided by any .service files "
    okular(14100)/kdeui (KNotification) KNotification::slotReceivedIdError: Error while contacting notify daemon "The name org.kde.knotify was not provided by any .service files"
    X Error: BadWindow (invalid Window parameter) 3
      Major opcode: 20 (X_GetProperty)
      Resource id: 0x2a0002e
    okular(14110) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14110) KPixmapSequence::frameSize: No frame loaded
    okular(14110) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14110) KPixmapSequence::frameSize: No frame loaded
    okular(14110) KPixmapSequence::Private::loadSequence: Invalid pixmap specified.
    okular(14110) KPixmapSequence::frameSize: No frame loaded
    X Error: BadWindow (invalid Window parameter) 3
      Major opcode: 20 (X_GetProperty)
      Resource id: 0x2a0001d
    X Error: BadWindow (invalid Window parameter) 3
      Major opcode: 20 (X_GetProperty)
      Resource id: 0x2a0001d
    I would much appreciate any suggestion to solve this problem.
Thanks a lot :)

    Read the article

  • Epsilon : An Oracle Customer Profile

    - by Anand Akela
    ZDNet published an article today based on an interview with Jeff White, vice president, technology, strategic database services at Epsilon. Jeff discussed Oracle Exadata Database Machine and Oracle Enterprise Manager with ZDNet writer Dan Kusnetzky. Read the article Epsilon: An Oracle Customer Profile. Jeff White, Epsilon VP, was honored with Oracle's Data Warehouse Leader of the Year award for Innovative Data Warehouse Deployment of Oracle Exadata and Oracle Enterprise Manager earlier this year. In one of the videos earlier this year, Jeff mentioned that Epsilon has streamlined IT administration, monitoring, and engineered systems maintenance with Oracle Enterprise Manager. Having gained in operational efficiencies, Epsilon is now providing greater efficiencies to its customers. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Reuse the data CRUD methods in data access layer, but they are updated too quickly

    - by ValidfroM
    I agree that we should put CRUD methods in a data access layer. However, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People, including me, seem to just add new methods to them rather than reuse the existing methods, because:
    We don't know whether an existing method is what we need. Even if we have the source code, do we really need to read other people's code and then make a decision?
    The API is updated too quickly, and we don't have time to get familiar with the DAO API.
    Back to the question: how do you solve this in your project? If we say "reuse", it really needs to be reusable rather than just an excuse.

    Read the article

  • The Calmest IT Guy in the World

    - by Markus Weber
    Unplanned outages and downtime still result not only in major productivity losses, but also in major financial losses. Along the same lines, if zero planned downtime and zero data loss are key to your IT environment and your business requirements, planning for them becomes very important, all while balancing performance, high availability, and cost. Oracle Database High Availability technologies will help you achieve these mission-critical goals, and are reflected in Oracle's best-practices offering, the Maximum Availability Architecture (MAA). We created three neat, short videos showcasing some typical use cases and highlighting three important components (amongst many more) of MAA:
    Oracle Real Application Clusters (RAC)
    Oracle Active Data Guard
    Oracle Flashback Technologies
    Make sure to watch those videos here, and learn about challenges and solutions around High Availability database environments from a recent Independent Oracle Users Group (IOUG) survey.

    Read the article

  • In Search of Automatic ORM with REST interface

    - by Dan Ray
    I have this wish that so far Google hasn't been able to fulfill. I want to find a package (ideally in PHP, because I know PHP, but I guess that's not a hard requirement) that you point at a database, it builds an ORM based on what it finds there, and exposes a REST interface over the web. Everything I've found in my searches requires a bunch of code--like, it wants you to build the classes for it, but it'll handle the REST request routing. Or it does database and relational stuff just fine, but you have to build your own methods for all the CRUD actions. That's dumb. REST is well defined. If I wanted to re-invent the wheel, I totally could, but I don't want to. Isn't there somebody who's built a one-shot super-simple auto-RESTing web service package?

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC clusters. With the release of the Database Plug-in (12.1.0.5.0), EM12c Rel 3 (12.1.0.3.0) now supports automated upgrading of RAC clusters in addition to standalone databases. This automation includes:
    Upgrade of the complete cluster across the nodes. (Example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB)
    Best practices in tune with your operations, where you can automate the upgrade in steps: Step 1: Upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test, and then move to the DBs). Step 2: Upgrade RAC DBs either separately or in a group (mass upgrade of RAC DBs in the cluster).
    Standard pre-requisite checks like the Cluster Verification Utility (CVU) and RAC checks.
    Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH), running checks) and downtime activities (like upgrading Clusterware to GI, upgrading RAC), thereby lowering the downtime required.
    Ability to configure Backup and Restore options as part of this upgrade process. You can choose to: a. Take a backup via this process (either Guaranteed Restore Point (GRP) or RMAN), b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup, or c. Ignore backup completely, if there are external mechanisms already in place.
    High Level Steps:
    Select the procedure "Upgrade Database" from the Database Provisioning Home page.
    Choose the Target Type for upgrade and the destination version.
    Pick and choose the Cluster; it picks up the complete topology since the clusterware/GI isn't upgraded already.
    Select the Gold Image of the destination version for deploying both the GI and RAC OHs.
    Specify the new OH patch and credentials, choose the Restore and Backup options, and if required provide additional pre and post scripts.
    Set the break points in the procedure execution to isolate downtime activities.
    Submit and track the procedure's execution status.
    The animation below captures the steps in the wizard. For a step-by-step process and to understand the support matrix, check this documentation link. Explore the functionality!! In the next blog, I will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using a Transient Logical Standby.

    Read the article
