Search Results

Search found 41365 results on 1655 pages for 'oracle database 11g advan'.


  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written under a running project that extends an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning them) or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" changes) and necessary data transformations are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are getting tired of figuring out how even to start, because our experience with unit testing does not cover data-intensive PL/SQL legacy systems (only those "from-the-book" greenfield Java projects).
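
    A minimal sketch of how a first test could look with the open-source utPLSQL v3 framework, assuming it is installed; the package under test (order_pkg.get_order_total) and the expected value are hypothetical placeholders:

        -- Hypothetical utPLSQL v3 test package; order_pkg.get_order_total and the
        -- expected value 100 stand in for whatever function is under test.
        create or replace package test_order_totals as
          --%suite(Order totals)

          --%test(Returns the expected total for a known order)
          procedure returns_expected_total;
        end test_order_totals;
        /
        create or replace package body test_order_totals as
          procedure returns_expected_total is
            l_total number;
          begin
            l_total := order_pkg.get_order_total(p_order_id => 42);
            ut.expect(l_total).to_equal(100);
          end returns_expected_total;
        end test_order_totals;
        /
        -- Run with: exec ut.run('test_order_totals');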

    Read the article

  • SQL query to translate a list of numbers matched against several ranges, to a list of values

    - by Claes Mogren
    I need to convert a list of numbers that fall within certain ranges into a list of values, ordered by a priority column. The table has the following values:

        | YEAR | R_MIN  | R_MAX  | VAL | PRIO |
        |------|--------|--------|-----|------|
        | 2010 |  18000 |  90100 |  52 |    6 |
        | 2010 | 240000 | 240099 |  82 |    3 |
        | 2010 | 250000 | 259999 |  50 |    5 |
        | 2010 | 260000 | 260010 |  92 |    1 |
        | 2010 | 330000 | 330010 |  73 |    4 |
        | 2010 | 330011 | 370020 |  50 |    5 |
        | 2010 | 380000 | 380050 |  84 |    2 |

    The ranges will be different for different years. The ranges within one year will never overlap. The input will be a year and a list of numbers that might fall within one of these ranges. The list of input numbers will be small, 1 to 10 numbers. Example of input numbers: (20000, 240004, 375000, 255000). With that input I would like to get a list ordered by the priority column, or a single value: 82 50 52. The only value I'm interested in here is 82, so UNIQUE and MAX_RESULTS=1 would do. It can easily be done with one query per number and then sorting in the Java code, but I would prefer to do it in a single SQL query. What SQL query, to be run in an Oracle database, would give me the desired result?
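
    One possible shape for such a query, sketched with the input numbers inlined via DUAL; the table name ranges is a placeholder for the actual table:

        -- Sketch only: join the input list against the range table and take the
        -- highest-priority (lowest PRIO) match. Names assume the layout shown above.
        SELECT val
          FROM (SELECT r.val
                  FROM ranges r
                  JOIN (SELECT 20000  AS num FROM dual UNION ALL
                        SELECT 240004 FROM dual UNION ALL
                        SELECT 375000 FROM dual UNION ALL
                        SELECT 255000 FROM dual) nums
                    ON nums.num BETWEEN r.r_min AND r.r_max
                 WHERE r.year = 2010
                 ORDER BY r.prio)
         WHERE ROWNUM = 1;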

    Read the article

  • How should I name my SQL query files? Should I use some methodology?

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I have written, and the list has now grown too large. I want to organize and rename these .sql files so that I can find the one I want easily. At the moment I'm using folders named Sales Dept, Field Team, Planning Dept, Special etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files according to their content, but this makes the file names longer and still does not completely meet my needs. Sometimes someone demands a special report and I name the file after that person, but that is not ideal either. I know duplicates and very similar files are accumulating over time, but I don't have control over them. Can you point me in the right direction for renaming all these files and folders and organizing my queries for easier and better control? TIA.

    Read the article

  • JDBC program performance degrades after running for a long time

    - by phyerbarte
    My program has an issue with Oracle query performance. I believe the SQL itself performs well, because it returns quickly in SQL*Plus. But when my program has been running for a long time, like a week, the SQL query (issued through JDBC) becomes slower (in my logs, the query time is much longer than when I originally started the program). When I restart the program, query performance comes back to normal. I think something could be wrong with the way I use the PreparedStatement, because the SQL I'm using does not use "?" placeholders at all - it is just a complex select query. The query is run by a util class. Here is the pertinent code building the query: public List<String[]> query(String sql, String[] args) { Connection conn = null; conn = openConnection(); conn.setAutoCommit(true); .... PreparedStatement preStatm = null; ResultSet rs = null; .... //set prepared statement arg code rs = preStatm.executeQuery(); .... finally { //close rs //close preStatm //close connection } } In my case, args is always null, so the caller just passes a query SQL string to this method. Is it possible that this could slow the DB query down after the program has been running a long time? Or should I use Statement instead, or pass args with "?" in the SQL? How can I find the root cause of my issue? Thanks.
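
    One hedged way to check the cursor-leak theory from the database side (these are standard Oracle dynamic performance views; run as a suitably privileged user) is to watch whether the number of open cursors held by the application's sessions keeps climbing while it runs:

        -- Sketch: count open cursors per session; a steadily growing count for the
        -- JDBC application's sessions usually points at statements or result sets
        -- that are never closed.
        SELECT s.sid, s.username, s.program, COUNT(*) AS open_cursors
          FROM v$open_cursor oc
          JOIN v$session    s ON s.sid = oc.sid
         GROUP BY s.sid, s.username, s.program
         ORDER BY open_cursors DESC;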

    Read the article

  • PL/SQL - Two statements with begin and end run fine separately but not together?

    - by Twiss
    Hi all, Just wondering if anyone can help with this. I have two PL/SQL statements for altering tables (adding extra fields) and they are as follows: -- Make GC_NAB field for Next Action By Dropdown begin if 'VARCHAR2' = 'NUMBER' and length('VARCHAR2')>0 and length('')>0 then execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10, ))'; elsif ('VARCHAR2' = 'NUMBER' and length('VARCHAR2')>0 and length('')=0) or 'VARCHAR2' = 'VARCHAR2' then execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10))'; else execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2)'; end if; commit; end; -- Make GC_NABID field for Next Action By Dropdown begin if 'NUMBER' = 'NUMBER' and length('NUMBER')>0 and length('')>0 then execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER(, ))'; elsif ('NUMBER' = 'NUMBER' and length('NUMBER')>0 and length('')=0) or 'NUMBER' = 'VARCHAR2' then execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER())'; else execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER)'; end if; commit; end; When I run these two queries separately, no problems. However, when run together as shown above, Oracle gives me an error when it starts the second statement: Error report: ORA-06550: line 15, column 1: PLS-00103: Encountered the symbol "BEGIN" 06550. 00000 - "line %s, column %s:\n%s" *Cause: Usually a PL/SQL compilation error. *Action: I'm assuming that this means the first statement is not terminated properly... is there anything I should put in between the statements to make it work properly? Thanks in advance everyone!
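
    For reference, when two anonymous blocks are run together as one script (e.g. in SQL*Plus or SQL Developer), each block is normally terminated with a slash on its own line; a stripped-down sketch of the same two alters:

        -- Minimal sketch: each anonymous block ends with ";" and is then sent to the
        -- server by a "/" on a line of its own, so the tool knows where one block stops.
        begin
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add (GC_NAB VARCHAR2(10))';
        end;
        /
        begin
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add (GC_NABID NUMBER)';
        end;
        /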

    Read the article

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example: tb_customers ( id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3) ) tb_suppliers ( id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx, archive_id NUMBER(3) ) The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key. My problem is with the select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do a: CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE); UPDATE temp SET archive_id=something; INSERT INTO ORIGINAL_TABLE (SELECT * FROM temp); DROP TABLE temp; I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone have a better solution?
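
    If the column list can be generated per table (for example from user_tab_columns), the copy itself needs no DDL at all; a hedged sketch for one of the example tables, with bind-style placeholders for the old and new archive ids:

        -- Sketch only: duplicate all rows of one archive under a new archive_id.
        -- The column list would be built dynamically per table; the archive ids
        -- are placeholders.
        INSERT INTO tb_customers (id, name, archive_id)
        SELECT id, name, :new_archive_id
          FROM tb_customers
         WHERE archive_id = :old_archive_id;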

    Read the article

  • How Best to Replace Ugly Queries and Dynamic PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and sometimes requiring dynamic SQL. That is, sometimes SQL queries require set based processing along with significant procedural processing. This is what PL/SQL is custom made for, but as a language it does not begin to compare to C#. Now and then I convert a PL/SQL procedure to C#, and am always amazed at how much cleaner and easier to both read and write the C# version is. The C# program might for example construct a SQL query string piece by piece and/or run several queries and process them as needed. The C# version is usually much faster as well, which must mean that I'm not very good at PL/SQL either. I do not currently have access to LINQ. My question is, how best to package all these little C# programs, which are really just mini reports, that is, replacements for ugly SQL queries? Right now I'm actually using NUnit to hold them, and calling each report a [Test], even though they aren't really tests. NUnit just happens to provide a convenient packaging framework.

    Read the article

  • How to call a stored procedure from Hibernate?

    - by user367097
    Hi, I have an Oracle stored procedure GET_VENDOR_STATUS_COUNT(DOCUMENT_ID IN NUMBER, NOT_INVITED OUT NUMBER, INVITE_WITHDRAWN OUT NUMBER, ...); all of the remaining parameters are OUT parameters. In the hbm file I have written - <sql-query name="getVendorStatus" callable="true"> <return-scalar column="NOT_INVITED" type="string"/> <return-scalar column="INVITE_WITHDRAWN" type="string"/> <return-scalar column="INVITED" type="string"/> <return-scalar column="DISQUALIFIED" type="string"/> <return-scalar column="RESPONSE_AWAITED" type="string"/> <return-scalar column="RESPONSE_IN_PROGRESS" type="string"/> <return-scalar column="RESPONSE_RECEIVED" type="string"/> { call GET_VENDOR_STATUS_COUNT(:DOCUMENT_ID , :NOT_INVITED ,:INVITE_WITHDRAWN ,:INVITED ,:DISQUALIFIED ,:RESPONSE_AWAITED ,:RESPONSE_IN_PROGRESS ,:RESPONSE_RECEIVED ) } </sql-query> In Java I have written - session.getNamedQuery("getVendorStatus").setParameter("DOCUMENT_ID", "DOCUMENT_ID").setParameter("NOT_INVITED", "NOT_INVITED") ... and so on for all the parameters. I am getting the SQL exception 18:29:33,056 WARN [JDBCExceptionReporter] SQL Error: 1006, SQLState: 72000 18:29:33,056 ERROR [JDBCExceptionReporter] ORA-01006: bind variable does not exist Please let me know the exact process of calling a stored procedure from Hibernate. I do not want to use a JDBC CallableStatement.
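
    One commonly suggested workaround (a sketch, not taken from the question): Hibernate's callable named queries on Oracle generally expect a function that returns a SYS_REFCURSOR (or a procedure whose first parameter is an OUT ref cursor), so the existing procedure can be wrapped in such a function. The wrapper below assumes the parameter list shown above:

        -- Hedged sketch: wrap the OUT-parameter procedure in a function that returns
        -- its values as a one-row cursor, which Hibernate can map with return-scalar.
        CREATE OR REPLACE FUNCTION get_vendor_status_count_rs(p_document_id IN NUMBER)
          RETURN SYS_REFCURSOR
        AS
          l_not_invited          NUMBER;
          l_invite_withdrawn     NUMBER;
          l_invited              NUMBER;
          l_disqualified         NUMBER;
          l_response_awaited     NUMBER;
          l_response_in_progress NUMBER;
          l_response_received    NUMBER;
          l_rs                   SYS_REFCURSOR;
        BEGIN
          GET_VENDOR_STATUS_COUNT(p_document_id, l_not_invited, l_invite_withdrawn,
                                  l_invited, l_disqualified, l_response_awaited,
                                  l_response_in_progress, l_response_received);
          OPEN l_rs FOR
            SELECT l_not_invited          AS not_invited,
                   l_invite_withdrawn     AS invite_withdrawn,
                   l_invited              AS invited,
                   l_disqualified         AS disqualified,
                   l_response_awaited     AS response_awaited,
                   l_response_in_progress AS response_in_progress,
                   l_response_received    AS response_received
              FROM dual;
          RETURN l_rs;
        END get_vendor_status_count_rs;
        /
        -- The named query would then call { ? = call get_vendor_status_count_rs(:DOCUMENT_ID) }.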

    Read the article

  • Return a REF CURSOR over procedure-generated data

    - by ThaDon
    I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row based on how well the INSERT went. Each row is inserted within a loop; the loop iterates over a cursor that supplies some values for the INSERT statement. What I need to return is a result set which looks like this: FIELDS_FROM_ROW_BEING_INSERTED.., STATUS VARCHAR2 The STATUS is determined by how the INSERT went. For instance, if the INSERT caused a DUP_VAL_ON_INDEX exception indicating there was a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row. By the end of it all, I'd have a result set of N rows, where N is the number of insert statements performed, and each row contains some identifying info for the row being inserted along with the "STATUS" of the insertion. Since there is no table in my DB to store the values I'd like to pass back to the user, I'm wondering how I can return the info. A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I want a global table - are there any temporary tables that get dropped after a session is done?
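
    For what it's worth, a global temporary table's definition is shared, but its rows are private to each session and disappear automatically; a minimal sketch with hypothetical column names:

        -- Sketch: the table definition is created once; each session sees only its own
        -- rows, which are purged at commit (or at session end with PRESERVE ROWS).
        CREATE GLOBAL TEMPORARY TABLE insert_status_tmp (
          row_key VARCHAR2(100),
          status  VARCHAR2(20)
        ) ON COMMIT PRESERVE ROWS;

        -- Inside the procedure: insert one status row per attempted INSERT, then hand
        -- the caller a cursor over the session's rows, e.g.
        -- OPEN p_result FOR SELECT row_key, status FROM insert_status_tmp;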

    Read the article

  • Cannot open database requested by the login. The login failed. Login failed for user

    - by Cipher
    Hi, I have copied a DB from one of my computers and am using it here. On trying to open the page which requires fetching content from the DB, on con.Open I am getting this exception: Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\cakephp.mdf". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\cakephp_log.LDF". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Cannot open database "cakephp" requested by the login. The login failed. Login failed for user 'Sarin-PC\Sarin'. I have attached the database from Management Studio Express 2008 and I have also checked the connection string. Here it is: <connectionStrings> <add name="cn" connectionString="server=.\sqlexpress;database=cakephp;integrated security=true;uid=sarin;pwd=******"/> </connectionStrings> In Visual Studio, when I test the connection, it says "Test connection succeeded". However, there is one strange thing going on: when I log in to Management Studio, there is no + sign next to the newly attached database, as shown. If the full Web.config is required, I have pasted it here: http://pastebin.com/sVAuN0Ug

    Read the article

  • Timestamps and Intervals: NUMTOYMINTERVAL SYSDATE CALCULATION SQL QUERY

    - by MeachamRob
    I am working on a homework problem; I'm close, but I think I need some help with a data conversion, or with the sysdate - start_date calculation. The question is: Using the EX schema, write a SELECT statement that retrieves the date_id and start_date from the Date_Sample table (format below), followed by a column named Years_and_Months_Since_Start that uses an interval function to retrieve the number of years and months that have elapsed between the start_date and the sysdate. (Your values will vary based on the date you do this lab.) Display only the records with start dates having the month and day equal to Feb 28 (of any year).

        DATE_ID  START_DATE                    YEARS_AND_MONTHS_SINCE_START
        2        Sunday , February 28, 1999    13-8
        4        Monday , February 28, 2005    7-8
        5        Tuesday , February 28, 2006   6-8

    The part of the EX schema that this question refers to is simply a Date_Sample table with two columns: DATE_ID NUMBER NOT NULL and START_DATE DATE. I have written this code: SELECT date_id, TO_CHAR(start_date, 'Day, MONTH DD, YYYY') AS start_date , NUMTOYMINTERVAL((SYSDATE - start_date), 'YEAR') AS years_and_months_since_start FROM date_sample WHERE TO_CHAR(start_date, 'MM/DD') = '02/28'; But my years-and-months-since-start column is not working properly. It produces very high numbers when the start date is from 1999-ish; i.e. it should be 13-8 and I'm getting 5027-2, so I know it's not correct. I used NUMTOYMINTERVAL, which should be correct, but I don't think the sysdate - start_date part is working. The data type for start_date is simply DATE. I tried ROUND but may need some help to get it right. Something is wrong with my calculation and I'm trying to figure out how to get the correct interval there. Not sure if I have provided enough information, but I will let you know if I figure it out before you do. It's a question from Murach's Oracle SQL and PL/SQL book, chapter 17, page 559, if anyone else is trying to learn that chapter.
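
    A hedged sketch of one way to build that interval: SYSDATE - start_date yields a number of days, so converting the elapsed months (via MONTHS_BETWEEN) into a year-month interval is one option:

        -- Sketch only: MONTHS_BETWEEN returns (possibly fractional) months, which
        -- NUMTOYMINTERVAL can turn into a YEAR TO MONTH interval; TRUNC drops the fraction.
        SELECT date_id,
               TO_CHAR(start_date, 'Day, Month DD, YYYY') AS start_date,
               NUMTOYMINTERVAL(TRUNC(MONTHS_BETWEEN(SYSDATE, start_date)), 'MONTH')
                 AS years_and_months_since_start
          FROM date_sample
         WHERE TO_CHAR(start_date, 'MM/DD') = '02/28';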

    Read the article

  • Cannot locate record in Delphi ADO query

    - by Danatela
    I can't locate any record in a TADOQuery using the PK. First, I tried the standard Locate method: PPUQuery.Locate('ID', SpPlansQuery['PPONREC'], []); It always returns False, but a manual search (walking the whole query and matching ID against the given PPONREC, which is really slow) finds the desired row. I tried using loPartialKey and switched the query's CursorLocation to clUseServer, but it didn't help. Next, I tried filtering my PPUQuery: PPUQuery.Filter := 'ID = ' + VarToStr(SpPlansQuery['PPONREC']); PPUQuery.Filtered := True; PPUQuery.First; But after that PPUQuery.Eof is True and PPUQuery.RecordCount equals 0. The underlying database is Oracle 9, and ID is of type INTEGER and is the PK of table TPORDER_CMK. PPUQuery.SQL is: SELECT tp.*, la.*, lm.*, ld.*, ld1.*, to_cmk.* FROM ppu_plan.tporder_cmk tp JOIN PPU_PLAN.LARTICLES la ON TP.ARTICLE = LA.ID JOIN PPU_PLAN.LMATERIAL lm ON TP.MATERIAL = lm.id JOIN PPU_PLAN.LCADEP ld ON TP.CADEP = LD.ID JOIN PPU_PLAN.LCADEP ld1 ON TP.PRODUCER = LD1.ID JOIN PPU_PLAN.TORDER_CMK to_cmk ON TP.order_id=TO_cmk.ID WHERE TP.PLAN_ID = :pplan_id What should I try next, and how can I solve this problem?

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor!  I think we the series on various reports, we come to a logical point. We covered all the reports at server level. This means the reports we saw were targeted towards activities that are related to instance level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere: Patient: Doc, It hurts when I touch my head. Doc: Ok, go on. What else have you experienced? Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc. Doc: Hmmm … Patient: I feel it hurts when I touch anywhere in my body. Doc: Ahh … now I get it. You need a plaster to your finger John. Sometimes the server level gives an indicator to what is happening in the system, but we need to get to the root cause for a specific database. So, this is the first blog in series where we would start discussing about database level reports. To launch database level reports, expand selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of database level reports. In this blog, we would talk about four “disk” reports because they are similar: Disk Usage Disk Usage by Top Tables Disk Usage by Table Disk Usage by Partition Disk Usage This report shows multiple information about the database. Let us discuss them one by one.  We have divided the output into 5 different sections. Section 1 shows the high level summary of the database. It shows the space used by database files (mdf and ldf). Under the hood, the report uses, various DMVs and DBCC Commands, it is using sys.data_spaces and DBCC SHOWFILESTATS. Section 2 and 3 are pie charts. One for data file allocation and another for the transaction log file. Pie chart for “Data Files Space Usage (%)” shows space consumed data, indexes, allocated to the SQL Server database, and unallocated space which is allocated to the SQL Server database but not yet filled with anything. “Transaction Log Space Usage (%)” used DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file. Section 4 shows the data from Default Trace and looks at Event IDs 92, 93, 94, 95 which are for “Data File Auto Grow”, “Log File Auto Grow”, “Data File Auto Shrink” and “Log File Auto Shrink” respectively. Here is an expanded view for that section. If default trace is not enabled, then this section would be replaced by the message “Trace Log is disabled” as highlighted below. Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. This shows the physical layout of the file. In case you have In-Memory Objects in the database (from SQL Server 2014), then report would show information about those as well. Here is the screenshot taken for a different database, which has In-Memory table. I have highlighted new things which are only shown for in-memory database. The new sections which are highlighted above are using sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for in-memory OLTP is ‘FX’ in sys.data_space. The next set of reports is targeted to get information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows we have in table? Is there any table which has a lot of reserved space but its unused? 
Which partition of the table is having more data? Disk Usage by Top Tables This report provides detailed data on the utilization of disk space by top 1000 tables within the Database. The report does not provide data for memory optimized tables. Disk Usage by Table This report is same as earlier report with few difference. First Report shows only 1000 rows First Report does order by values in DMV sys.dm_db_partition_stats whereas second one does it based on name of the table. Both of the reports have interactive sort facility. We can click on any column header and change the sorting order of data. Disk Usage by Partition This report shows the distribution of the data in table based on partition in the table. This is so similar to previous output with the partition details now. Here is the query taken from profiler. SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number ,      (dense_rank() OVER (ORDER BY a5.name, a2.name))%2 AS l1 ,      a1.OBJECT_ID ,      a5.name AS [schema] ,       a2.name ,       a1.index_id ,       a3.name AS index_name ,       a3.type_desc ,       a1.partition_number ,       a1.used_page_count * 8 AS total_used_pages ,       a1.reserved_page_count * 8 AS total_reserved_pages ,       a1.row_count FROM sys.dm_db_partition_stats a1 INNER JOIN sys.all_objects a2  ON ( a1.OBJECT_ID = a2.OBJECT_ID) AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1) INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id) LEFT OUTER JOIN  sys.indexes a3  ON ( (a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id) ) WHERE (SELECT MAX(DISTINCT partition_number) FROM sys.dm_db_partition_stats a4 WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1 AND a2.TYPE <> N'S' AND  a2.TYPE <> N'IT' ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number Using all of the above reports, you should be able to get the usage of database files and also space used by tables. I think this is too much disk information for a single blog and I hope you have used them in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
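
    For reference, the two DBCC commands mentioned for the Disk Usage report can also be run directly to see the same raw numbers yourself (a quick sketch; both are standard SQL Server commands):

        -- Log file usage per database, as used by the "Transaction Log Space Usage (%)" chart.
        DBCC SQLPERF(LOGSPACE);

        -- Data file extent usage for the current database, as used by sections 1 and 5 of the report.
        DBCC SHOWFILESTATS;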

    Read the article

  • postfix: Temporary lookup failure

    - by mk_89
    I have followed the tutorials step by step for installing and configuring postfix https://help.ubuntu.com/community/Postfix https://help.ubuntu.com/community/PostfixBasicSetupHowto I am trying to test the services to whether Temporary lookup failure error telnet localhost 25 250 2.1.0 Ok rcpt to: fmaster@localhost 451 4.3.0 <fmaster@localhost>: Temporary lookup failure rcpt to: info@localhost 451 4.3.0 <info@localhost>: Temporary lookup failure I have tried searching the web but I have found no solutions, why am I getting this problem? mail.log Sep 24 01:03:05 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<root@localhost> to=<info@localhost> proto=ESMTP helo=<localhost> Sep 24 01:03:19 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <root@localhost>: Temporary lookup failure; from=<root@localhost> to=<root@localhost> proto=ESMTP helo=<localhost> Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: timeout after RCPT from unknown[::1] Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: disconnect from unknown[::1] Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:00:49 Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:00:49 Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max cache size 1 at Sep 24 01:00:49 Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: connect from unknown[::1] Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/smtpd[22175]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=ESMTP helo=<localhost> Sep 24 01:16:30 postfix/smtpd[22175]: last message repeated 5 times Sep 24 01:16:30 bookcdb postfix/smtpd[22175]: disconnect from unknown[::1] Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max cache size 1 at Sep 24 01:15:36 Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:20:32 bookcdb postfix/error[22402]: D0C596E0B34: to=<[email protected]>, relay=none, delay=5369, delays=5369/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/error[22631]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=30203, delays=30203/0.01/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:32 bookcdb postfix/error[22632]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=30115, delays=30115/0.01/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: warning: non-SMTP command from unknown[::1]: subject: fdf Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@locahost" Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@locahost> to=<fmaster@localhost> proto=SMTP Sep 24 01:26:45 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max cache size 1 at Sep 24 01:24:16 Sep 24 01:34:57 bookcdb dovecot: master: Dovecot v2.0.19 starting up (core dumps disabled) Sep 24 01:35:02 bookcdb amavis[1009]: starting. /usr/sbin/amavisd-new at mail.bookcdb.com amavisd-new-2.6.5 (20110407), Unicode aware Sep 24 01:35:02 bookcdb amavis[1009]: Perl version 5.014002 Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: Group Not Defined. Defaulting to EGID '114 114' Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: User Not Defined. 
Defaulting to EUID '108' Sep 24 01:35:05 bookcdb amavis[1155]: Module Amavis::Conf 2.208 Sep 24 01:35:05 bookcdb amavis[1155]: Module Archive::Zip 1.30 Sep 24 01:35:05 bookcdb amavis[1155]: Module BerkeleyDB 0.49 Sep 24 01:35:05 bookcdb amavis[1155]: Module Compress::Zlib 2.033 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::TNEF 0.17 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::UUlib 1.4 Sep 24 01:35:05 bookcdb amavis[1155]: Module Crypt::OpenSSL::RSA 0.27 Sep 24 01:35:05 bookcdb amavis[1155]: Module DB_File 1.821 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::MD5 2.51 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::SHA 5.61 Sep 24 01:35:05 bookcdb amavis[1155]: Module IO::Socket::INET6 2.69 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Entity 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Parser 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Tools 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Signer 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Verifier 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Header 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Internet 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SPF v2.008 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SpamAssassin 3.003002 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::DNS 0.66 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::Server 0.99 Sep 24 01:35:05 bookcdb amavis[1155]: Module NetAddr::IP 4.058 Sep 24 01:35:05 bookcdb amavis[1155]: Module Socket6 0.23 Sep 24 01:35:05 bookcdb amavis[1155]: Module Time::HiRes 1.972101 Sep 24 01:35:05 bookcdb amavis[1155]: Module URI 1.59 Sep 24 01:35:05 bookcdb amavis[1155]: Module Unix::Syslog 1.1 Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::DB code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::Cache code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL base code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Log code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Quarantine NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::SQL code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::LDAP code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: AM.PDP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Courier proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Pipe-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: BSMTP-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Local-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: OS_Fingerprint code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-VIRUS code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-EXT code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-C code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-SA code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Unpackers code loaded Sep 24 01:35:05 bookcdb amavis[1155]: DKIM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Tools code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Found $file at /usr/bin/file Sep 24 01:35:05 bookcdb amavis[1155]: No $altermime, not using it Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .mail Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .F Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .Z at 
/bin/uncompress Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .gz Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .bz2 at /bin/bzip2 -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lzo tried: lzop -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rpm tried: rpm2cpio.pl, rpm2cpio Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .cpio at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .tar at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .deb at /usr/bin/ar Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .zip Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .7z tried: 7zr, 7za, 7z Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rar tried: unrar-free Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arj tried: arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arc tried: nomarch, arc Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .zoo tried: zoo Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lha Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .doc tried: ripole Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .cab tried: cabextract Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .exe tried: unrar-free; arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: Using primary internal av scanner code for ClamAV-clamd Sep 24 01:35:05 bookcdb amavis[1155]: Found secondary av scanner ClamAV-clamscan at /usr/bin/clamscan Sep 24 01:35:05 bookcdb amavis[1155]: Creating db in /var/lib/amavis/db/; BerkeleyDB 0.49, libdb 5.1 Sep 24 01:35:05 bookcdb postgrey[1219]: Process Backgrounded Sep 24 01:35:05 bookcdb postgrey[1219]: 2012/09/24-01:35:05 postgrey (type Net::Server::Multiplex) starting! pid(1219) Sep 24 01:35:05 bookcdb postgrey[1219]: Using default listen value of 128 Sep 24 01:35:05 bookcdb postgrey[1219]: Binding to TCP port 10023 on host localhost#012 Sep 24 01:35:05 bookcdb postgrey[1219]: Setting gid to "116 116" Sep 24 01:35:05 bookcdb postgrey[1219]: Setting uid to "110" Sep 24 01:35:06 bookcdb spamd[1231]: logger: removing stderr method Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server started on port 783/tcp (running version 3.3.2) Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server pid: 1233 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1238 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1240 Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: SI Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: II Sep 24 01:35:15 bookcdb postfix/master[1729]: daemon started -- version 2.9.3, configuration /etc/postfix Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:37:28 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2730, secured Sep 24 01:37:28 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=44/697 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2732, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=464/1303 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2734, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:31 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2737, secured Sep 24 01:37:31 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:49 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2739, secured Sep 24 01:37:49 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:49 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. Sep 24 01:37:54 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2741, secured Sep 24 01:37:54 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:54 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. 
Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2743, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=44/697 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2745, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=464/1303 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2747, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 01:38:55 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "info@localhost" Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:39:37 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:39:47 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:40:13 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<info@localhost> to=<info@localhost> proto=SMTP Sep 24 01:43:08 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max cache size 1 at Sep 24 01:36:08 Sep 24 01:48:05 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2805, secured Sep 24 01:48:05 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=85/681 Sep 24 01:51:30 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2815, secured Sep 24 01:51:30 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/error[2842]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=8391, delays=8391/0.05/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:05:16 bookcdb postfix/error[2843]: E76996E09B2: to=<[email protected]>, relay=none, delay=8416, delays=8416/0.03/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:30:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 02:30:15 bookcdb 
postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:30:15 bookcdb postfix/error[2914]: D0C596E0B34: to=<[email protected]>, relay=none, delay=9551, delays=9551/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/error[2926]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=34386, delays=34386/0.03/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/error[2927]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=34299, delays=34298/0.02/0/0.12, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/error[3025]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=12590, delays=12590/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/error[3026]: E76996E09B2: to=<[email protected]>, relay=none, delay=12616, delays=12616/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:40:15 bookcdb postfix/error[3097]: D0C596E0B34: to=<[email protected]>, relay=none, delay=13752, delays=13752/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/error[3129]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=38586, delays=38586/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/error[3130]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=38498, delays=38498/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 home_mailbox = Maildir/ inet_interfaces = all inet_protocols = all mailbox_command = mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = server1.bookcdb.com, bookcdb.com, localhost.bookcdb.com, localho st, bookcdb.com myhostname = server1.bookcdb.com mynetworks = 127.0.0.0/8 myorigin = /etc/mailname 
readme_directory = no recipient_delimiter = + relay_domains = lists.bookcdb.com relayhost = smtp_tls_note_starttls_offer = yes smtp_tls_security_level = may smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,rejec t_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = smtpd_sasl_security_options = noanonymous smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_auth_only = no smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_key_file = /etc/ssl/private/smtpd.key smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_security_level = may smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_tls_session_cache_timeout = 3600s smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport

    Read the article

  • CVE-2006-3744 Multiple Integer overflow vulnerabilities in ImageMagick

    - by chandan
    CVE: CVE-2006-3744 (Numeric Errors vulnerability)
    CVSSv2 Base Score: 5.1
    Component: ImageMagick
    Product and Resolution: Solaris 10 - SPARC: 136882-03, X86: 136883-03

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Older SAS1 hardware vs. newer SAS2 hardware

    - by user12620172
    I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email so I couldn't respond directly, but I said this blog would be blunt, frank, and open so I have no problem addressing it publicly. A quick history lesson here: When Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun nor did Sun own the IP for it. Now, when Oracle took over, they had a problem with that, and I really can’t blame them. The decision was made to cut off that JBOD and it’s manufacturer completely and use our own where Oracle controlled both the IP and the manufacturing. So in the summer of 2010, the cut was made, and the 7410 and 7310 had a hardware refresh and now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA. This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September of 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that’s where we got today’s current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since last July of 2010. Now for the bad news. People who have the 7410 and 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there’s just no way to get more of them, as they don’t exist. There are some options, however. Oracle support does support taking out the SAS1 HBAs in the old 7410 and 7310 and put in newer SAS2 HBAs which can talk to the new trays. Hey, I didn’t say it was a great option, I just said it’s an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this, and took the SAS1 JBOD and connected it to another server and formatted the drives and is using it as a plain, non-7000 JBOD. This is not supported by Oracle support. The other option is to just keep it as-is, as it works just fine, but you just can’t expand it. Then you can get a newer 7x20 series, and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

    Read the article

  • BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations

    - by Steven Chan
    A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3. There are two major options for SOA-related integrations for the E-Business Suite:

      1. Custom integrations using the Oracle Application Server (SOA) Adapter for Oracle Applications
      2. Prebuilt SOA integrations for E-Business Suite using BPEL Process Manager

    For more background about these two options, please see this article: BPEL 10.1.3.5 Certified for Prebuilt E-Business Suite 12 SOA Integrations

    Read the article

  • How do I reduce the size of the mlocate database?

    - by MountainX
    I'm out of space on /var:

        25G 25G 0 100% /var

    It looks like mlocate.db is the problem:

        # find . -printf '%s %p\n' | sort -nr | head
        13140140032 ./lib/mlocate/mlocate.db.cgLMAM
        12409839616 ./lib/mlocate/mlocate.db.MqGeqe

        cat /etc/updatedb.conf
        PRUNE_BIND_MOUNTS="yes"
        PRUNENAMES=".git .bzr .hg .svn"
        PRUNEPATHS="/tmp /var/spool /media"
        PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"

    I don't see anything else to prune. So how can I fix this? Thanks

    Read the article

  • JMX Based Monitoring - Part Two - JVM Monitoring

    - by Anthony Shorten
    This is the second article in the series focussing on the JMX based monitoring capabilities possible with the Oracle Utilities Application Framework. In all versions of the Oracle Utilities Application Framework, it is possible to use the basic JMX based monitoring available with the Java Virtual Machine to provide basic statistics about the JVM. In Java 5 and above, the JVM automatically allows local monitoring of JVM statistics from an appropriate console. When I say local I mean the monitoring tool must be executed from the same machine (and in some cases as the same user that is running the JVM) to connect to the JVM directly. If you are using jconsole, for example, then you must have access to a GUI (X-Windows or Windows) to display the jconsole output. This is the easiest way of monitoring without doing too much configuration, but it is not always practical. Java offers a remote monitoring capability to allow you to connect to a remotely executing JVM from a console (like jconsole). To use this facility, additional JVM options must be added to the command line that started the JVM. Details of the additional options for the version of Java you are running are located at the JMX information site. Typically, to remotely connect to a running JVM, that JVM must be configured with the following categories of options: JMX Port - the JVM must allow connections on a listening port specified on the command line. Connection security - the connection to the JVM can be secured. This is recommended, as JMX is not just a monitoring protocol, it is a management protocol: it is possible to change values in a running JVM using JMX and there are NO "Are you sure?" safeguards. For an Oracle Utilities Application Framework based application there are a few guidelines when configuring and using this JMX based remote monitoring of the JVMs: Online JVM - the JVM used to run the online system is embedded within the J2EE Web Application Server. To enable JMX monitoring on this JVM you can either change the startup script that starts the Web Application Server or check whether your J2EE Web Application Server natively supports JVM statistics collection. Child JVMs (COBOL only) - the Child JVMs should not be monitored using this method, as they are recycled regularly by the configuration and therefore the statistics collected are of little value. Batch Threadpools - batch already has a JMX interface (which will be covered in another article). Additional monitoring can be enabled, but the base supported monitoring is sufficient for most needs. If you are an Oracle Utilities Application Framework site, you can specify the additional options for JMX Java monitoring via the OPTS parameters supported for each component of the architecture. Just ensure the port numbers used are unique for each JVM running on any machine.

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).
    1. Firstly as a Mapping Developer.....
    1.1 Control when uniqueness is enabled A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.
    1.2 Handle cleanup after successful execution Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.
    1.3 Handle cleanup after unsuccessful execution If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, then there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer; it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.
    2. Secondly as a Procedure or KM Developer.....
    In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7). For example, with the code type 'Pre-executed Code' selected you can see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.
    2.1 New uniqueness tags
    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names would always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples are:
    <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?> SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ
    <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?> SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_
    Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):
    UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
    UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
    IS_CONCURRENT - Returns info about the current mapping, will return 0 or 1 (only in % phase)
    GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase)
    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping and will return 0 or 1.
    2.2 Additional APIs
    Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can be optionally truncated to a maximum length and use a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]). This API is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. For example:
    <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
    <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?> C$_MY_TAB7wDiBe80vBog1auacS1xB_AE
    <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?> C2_MY_TAB7wDiBe80vBog1auacS1xB.log
    2.3 Name length generation
    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology and truncation may need to be applied.
    When a unique tag is included in the generated string it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to ensure uniqueness of the final resultant name.
    SUMMARY
    To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases and provides APIs to help develop any procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
    In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release. 11gR2 had four major focus areas: a more simplified and customizable user experience; support for cloud, mobile, and social applications; extreme scalability; and a clear upgrade path. For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process. The plan should include initiation and discovery, assessment and analysis, future state architecture, review and collaboration, and gap analysis. To help better understand your upgrade choices, SDG, an Oracle partner, has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration. In the second of this series on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration, Santosh Kumar Singh from SDG discusses the proper steps that should be taken during the planning-to-post-implementation phases to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com In the last of this series of white papers, Santosh will talk about Identity and Access Management best practices and how these need to be considered when going through with an OIM migration. If you have not taken the opportunity, please read the first in this series, which discusses the Migration Approach, Methodology, and Tools for you to consider when planning a migration from SIM to OIM. Read the white paper for Part 1: Download Part 1 from SDGC.com
    About the Author: Santosh Kumar Singh, Identity and Access Management (IAM) Practice Leader. Santosh, in his capacity as SDG Identity and Access Management (IAM) Practice Leader, has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement delivery for this Practice. He brings over 12 years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) application, and federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprint. Santosh possesses strong architecture and implementation expertise in all areas within these technologies and has repeatedly led teams in successfully deploying complex technical solutions.
    About SDG: SDG Corporation empowers forward thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit their clients’ needs; faster time-to-market with its pre-built solutions and frameworks; a broad-based foundation of domain experts, and deep program management expertise. (www.sdgc.com)

    Read the article

  • JD Edwards World Reporting Made Easy with Real Time Reporting Tools from The GL Company

    Fred talks to Paul Yarwood, US Operations General Manager and Richard Crotty, North America Business Development Manager for The GL Company, an Oracle Certified Partner, and Denise Grills, Senior Director of Marketing and Product Strategy for Oracle's JD Edwards World products. They discuss how the finance department of JD Edwards World customers can have complete control over their management reporting with a true inquiry, consolidation, and reporting solution from The GL Company, freeing up the finance team from being dependent upon IT time and resources.

    Read the article

  • How does one check if a table exists in an Android SQLite database?

    - by camperdave
    I have an Android app that needs to check if there's already a record in the database, and if not, process some things and eventually insert it, and simply read the data from the database if the data does exist. I'm using a subclass of SQLiteOpenHelper to create and get a writable instance of SQLiteDatabase, which I thought automatically took care of creating the table if it didn't already exist (since the code to do that is in the onCreate(...) method). However, when the table does NOT yet exist, and the first method run on the SQLiteDatabase object I have is a call to query(...), my logcat shows an error of "I/Database(26434): sqlite returned: error code = 1, msg = no such table: appdata", and sure enough, the appdata table isn't being created. Any ideas on why? I'm looking for either a method to test if the table exists (because if it doesn't, the data's certainly not in it, and I don't need to read it until I write to it, which seems to create the table properly), or a way to make sure that it gets created, and is just empty, in time for that first call to query(...).
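    One common way to make that check explicit is to query SQLite's own sqlite_master catalog for the table name before running the real query. The sketch below assumes db is the SQLiteDatabase returned by the SQLiteOpenHelper; the "appdata" name comes from the question, while the helper class and method names are illustrative only.

        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;

        public final class TableUtils {
            // Returns true if a table with the given name exists in this database.
            public static boolean tableExists(SQLiteDatabase db, String tableName) {
                Cursor cursor = db.rawQuery(
                        "SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?",
                        new String[] { tableName });
                try {
                    return cursor.moveToFirst(); // at least one matching row means the table exists
                } finally {
                    cursor.close();
                }
            }
        }

    If tableExists(db, "appdata") returns false, onCreate(...) was never run for that database file - it only fires when the file is first created, not when a new table is added to an existing database - so the table has to be created explicitly, or the database version bumped so that onUpgrade(...) runs.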

    Read the article

  • BIWA Wednesday TechCast Series - Opposition to Data Warehouse Initiatives

    - by jenny.gelhausen
    BIWA Wednesday TechCast Series - 19th Event! Opposition to Data Warehouse Initiatives Please join us for this webcast on Wednesday, March 24, 12 noon Eastern or check your local area's time Webcast is open to clients, prospects and partners. No matter how good your technology and technical skills, organizational issues can derail a data warehousing or BI project. Therefore BIWA presents a vital topic that crosses product boundaries: organizational resistance to data warehouse initiatives - how to recognize it and what to do about it. Many a DW/BI professional has been surprised by organizational resistance to DW/BI initiatives. Yet real organizational imperatives may be behind this apparently irrational behavior. Based on in-depth interviews with IT professionals, industry consultants, and power users, our speaker Bruce Jenks will present his research findings about what drives organizational resistance to data warehouse initiatives. The talk will cover specific behaviors that can signal organizational resistance to a data warehouse program and what organizations have done to address such resistance. Presenter: Bruce Jenks of Dun and Bradstreet Bruce Jenks has over 20 years experience in data warehousing and business intelligence, much of it as a consultant to large organizations spanning the US. Bruce's data warehousing clients have included firms such as Sprint, Gallo Wines, Southern California Edison, The Gap, and Safeway. He started his data warehousing career at Metaphor Computers, a pioneering DW/BI firm from which a number of industry luminaries sprang including Ralph Kimball (author of The Data Warehouse Toolkit). Bruce continued his data warehousing career at HP, Stanford University and other firms. Bruce is currently completing his doctorate in business administration at Golden Gate University, and today's material arises from his doctoral research. He is also a principal consultant for Dun and Bradstreet. Audio Dial-In: 866 682 4770 Audio Meeting ID: 1683901 Audio Meeting Passcode: 334451 Web Conference: Please register at https://www1.gotomeeting.com/register/807185273 After you register you will be provided with a link to the TechCast. Invitation to Speakers: All BIWA members and Oracle professionals (experts, end users, managers, DBAs, developers, data analysts, ISVs, partners, etc.) may submit abstracts for 45 minute technical webcasts to our Oracle BIWA (IOUG SIG) Community. Submit your BIWA TechCast abstract today! BIWA is a worldwide forum with over 2000 members who are business intelligence, warehousing and analytics professionals. BIWA presents information, experiences and best practices in successfully deploying Oracle Database-centric BI, Data Warehousing, and Analytics products, features and Options--the Oracle Database "BIWA" platform. Attendance Information & Replays at the BIWA website: oraclebiwa.org

    Read the article

  • Enterprise Performance Management: Driving Management Excellence

    Extending operational excellence to management excellence is the new strategic imperative for organizations large and small, all around the world. Management Excellence is a strategy for organizations to differentiate from their competition, by being smarter, more agile and more aligned. Tune into this conversation with John Kopcke, Senior Vice President of Oracle’s Enterprise Performance Management Global Business Unit to learn how leading companies are integrating their management processes and using Oracle’s EPM System to achieve management excellence.

    Read the article

< Previous Page | 694 695 696 697 698 699 700 701 702 703 704 705  | Next Page >