Search Results

Search found 46416 results on 1857 pages for 'access log'.


  • User password rejected on login screen but accepted on text console login

    - by MadsirR
    I had to force-shutdown my Ubuntu 12.04 64-bit, after which I restarted and tried to log in as a normal user, which was rejected several times. I then logged in as guest and switched to a tty to log in to my regular account with my normal password, which succeeded. (So the password is still valid.) How can I regain access via the normal login procedure (welcome screen)?

    Update: When I tried to log on with my new password, it was again denied. When I deliberately tried to log on with a faulty password, an error message came back saying: Access denied - wrong password. I suppose the first time the password was not rejected, but the procedure was aborted for some reason.

    Some additional info after trying to find a solution: I am convinced it is a Compiz issue. Why? Before this happened, all sessions came to a grinding halt, regardless of being logged on in a 2D or 3D environment. I found a link saying that I should remove Compiz and proceed in a 2D environment, which initially worked without a glitch, until my system went into a state of total oblivion. Only after that did the above-mentioned troubles appear. In the meantime I have happened to find a thread with reference 17381, describing exactly what I have experienced. For now, I will try to cure this situation (later this week) and report back with the results, hopefully to close this post. In the meantime I cordially thank you all, even if you didn't kill the problem; you gave me the inspiration to look further and find a possible cure.

    Update 2: After 15 hrs of trial and error I called it quits. (When I decided to tackle this problem, I had given myself 12 hrs, to avoid massive loss of time.) I decided to re-install Precise, since the "point 1" version has become available. Log-in is back to normal, as is the graphic environment. Response to mouse input is still appalling, especially when I have a series of screens open as "children" of a "parent" screen. It still completely locks up. I have installed Enlightenment, Gnome Classic, Gnome 3 and Cinnamon, and they all behave in a similar fashion.

    FOR THOSE WHO NEED A WAY OUT IN SITUATIONS LIKE THIS: Open a terminal with [Ctrl+Alt+F2]. Type [sudo killall firefox] (or whatever application you wish to terminate). Key in your password. Return to your graphical screen with [Ctrl+Alt+F7], and Bob's your uncle. Just re-open Firefox like nothing happened. Next time you are stuck: [Ctrl+Alt+F2], up arrow till you meet the command of your desire, [Ctrl+Alt+F7], etcetera. Hope this is of help.

    My next move will be to upgrade the kernel to 3.4 from the repositories for 12.10. However, since this entails a totally new situation, I will start a new thread on this site to avoid topic pollution. I will keep you posted. Still.

    Read the article

  • Working with Analytic Workflow Manager (AWM) - Part 8 Cube Metadata Analysis

    - by Mohan Ramanuja
    CUBE SIZE
    select dbal.owner||'.'||substr(dbal.table_name,4) awname,
           sum(dbas.bytes)/1024/1024 as mb,
           dbas.tablespace_name
    from dba_lobs dbal, dba_segments dbas
    where dbal.column_name = 'AWLOB'
      and dbal.segment_name = dbas.segment_name
    group by dbal.owner, dbal.table_name, dbas.tablespace_name
    order by dbal.owner, dbal.table_name

    SESSION RESOURCES
    select vses.username||':'||vsst.sid username, vstt.name, max(vsst.value) value
    from v$sesstat vsst, v$statname vstt, v$session vses
    where vstt.statistic# = vsst.statistic#
      and vsst.sid = vses.sid
      and VSES.USERNAME LIKE ('ATTRIBDW_OWN')
      and vstt.name in ('session pga memory', 'session pga memory max', 'session uga memory',
          'session uga memory max', 'session cursor cache count', 'session cursor cache hits',
          'session stored procedure space', 'opened cursors current', 'opened cursors cumulative')
      and vses.username is not null
    group by vsst.sid, vses.username, vstt.name
    order by vsst.sid, vses.username, vstt.name

    OLAP PGA USE
    select 'OLAP Pages Occupying: '||
           round((((select sum(nvl(pool_size,1)) from v$aw_calc)) /
                  (select value from v$pgastat where name = 'total PGA inuse')),2)*100||'%' info
    from dual
    union
    select 'Total PGA Inuse Size: '||value/1024||' KB' info
    from v$pgastat where name = 'total PGA inuse'
    union
    select 'Total OLAP Page Size: '||round(sum(nvl(pool_size,1))/1024,0)||' KB' info
    from v$aw_calc
    order by info desc

    OLAP PGA USAGE PER USER
    select vs.username, vs.sid,
           round(pga_used_mem/1024/1024,2)||' MB' pga_used,
           round(pga_max_mem/1024/1024,2)||' MB' pga_max,
           round(pool_size/1024/1024,2)||' MB' olap_pp,
           round(100*(pool_hits-pool_misses)/pool_hits,2)||'%' olap_ratio
    from v$process vp, v$session vs, v$aw_calc va
    where session_id=vs.sid and addr = paddr

    CUBE LOADING SCRIPT
    REM The 'set define off' statement is needed only if running this script through SQL*Plus.
    REM If you are using another tool to run this script, the line below may be commented out.
    set define off
    BEGIN
      DBMS_CUBE.BUILD(
        'VALIDATE ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNTRETURN',
        'CCCCC',  -- refresh method
        false,    -- refresh after errors
        0,        -- parallelism
        true,     -- atomic refresh
        true,     -- automatic order
        false);   -- add dimensions
    END;
    /
    BEGIN
      DBMS_CUBE.BUILD(
        'ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNTRETURN',
        'CCCCC',  -- refresh method
        false,    -- refresh after errors
        0,        -- parallelism
        true,     -- atomic refresh
        true,     -- automatic order
        false);   -- add dimensions
    END;
    /

    VISUALIZATION OBJECT - AW$ATTRIBDW_OWN
    CREATE TABLE "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"
        (
            "PS#"    NUMBER(10,0),
            "GEN#"   NUMBER(10,0),
            "EXTNUM" NUMBER(8,0),
            "AWLOB" BLOB,
            "OBJNAME"  VARCHAR2(256 BYTE),
            "PARTNAME" VARCHAR2(256 BYTE)
        )
        PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE
        (
            BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT
        )
        TABLESPACE "ATTRIBDW_DATA" LOB
        (
            "AWLOB"
        )
        STORE AS SECUREFILE
        (
            TABLESPACE "ATTRIBDW_DATA"
DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        PARTITION BY RANGE        (            "GEN#"        )        SUBPARTITION BY HASH        (            "PS#",            "EXTNUM"        )        SUBPARTITIONS 8        (            PARTITION "PTN1" VALUES LESS THAN (1) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE READS LOGGING NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP661" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP662" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP663" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP664" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP665" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP666" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP667" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP668" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" ) ,            PARTITION "PTNN" VALUES LESS THAN (MAXVALUE) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP669" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP670" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP671" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP672" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP673" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP674" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP675" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP676" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" )        ) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."ATTRIBDW_OWN_I$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        "PS#", "GEN#", "EXTNUM"    )    PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE    (        INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT    )    TABLESPACE "ATTRIBDW_DATA" ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000406980C00004$$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        PCTFREE 10 INITRANS 1 
MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOCAL (PARTITION "SYS_IL_P711" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP695" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP696" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP697" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP698" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP699" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP700" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP701" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP702" TABLESPACE "ATTRIBDW_DATA" ) , PARTITION "SYS_IL_P712" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP703" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP704" TABLESPACE        "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP705" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP706" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP707" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP708" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP709" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP710" TABLESPACE "ATTRIBDW_DATA" ) ) PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE BUILD LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_BUILD_LOG"        (            "BUILD_ID"          NUMBER,            "SLAVE_NUMBER"      NUMBER,            "STATUS"            VARCHAR2(10 BYTE),            "COMMAND"           VARCHAR2(25 BYTE),            "BUILD_OBJECT"      VARCHAR2(30 BYTE),            "BUILD_OBJECT_TYPE" VARCHAR2(10 BYTE),            "OUTPUT" CLOB,            "AW"            VARCHAR2(30 BYTE),            "OWNER"         VARCHAR2(30 BYTE),            "PARTITION"     VARCHAR2(50 BYTE),            "SCHEDULER_JOB" VARCHAR2(100 BYTE),            "TIME" TIMESTAMP (6)WITH TIME ZONE,        "BUILD_SCRIPT" CLOB,        "BUILD_TYPE"            VARCHAR2(22 BYTE),        "COMMAND_DEPTH"         NUMBER(2,0),        "BUILD_SUB_OBJECT"      VARCHAR2(30 BYTE),        "REFRESH_METHOD"        VARCHAR2(1 BYTE),        "SEQ_NUMBER"            NUMBER,        "COMMAND_NUMBER"        NUMBER,        "IN_BRANCH"             NUMBER(1,0),        "COMMAND_STATUS_NUMBER" NUMBER,        "BUILD_NAME"            VARCHAR2(100 BYTE)        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "OUTPUT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        LOB        (            "BUILD_SCRIPT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;CREATE UNIQUE INDEX 
"ATTRIBDW_OWN"."SYS_IL0000407294C00013$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407294C00007$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG" ( PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE DIMENSION COMPILE  CREATE TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"        (            "ID"               NUMBER,            "SEQ_NUMBER"       NUMBER,            "ERROR#"           NUMBER(8,0) NOT NULL ENABLE,            "ERROR_MESSAGE"    VARCHAR2(2000 BYTE),            "DIMENSION"        VARCHAR2(100 BYTE),            "DIMENSION_MEMBER" VARCHAR2(100 BYTE),            "MEMBER_ANCESTOR"  VARCHAR2(100 BYTE),            "HIERARCHY1"       VARCHAR2(100 BYTE),            "HIERARCHY2"       VARCHAR2(100 BYTE),            "ERROR_CONTEXT" CLOB        )        SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "ATTRIBDW_DATA" LOB        (            "ERROR_CONTEXT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION"IS    'Name of dimension being compiled';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION_MEMBER"IS    'Problem dimension member';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."MEMBER_ANCESTOR"IS    'Problem dimension member''s parent';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY1"IS    'First hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY2"IS    'Second hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_CONTEXT"IS    'Extra information for error';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"IS    'Cube dimension compile log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407307C00010$$" ON "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE( INITIAL 1048576 NEXT 1048576 MAXEXTENTS 2147483645) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE OPERATING LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"        (            "INST_ID"    NUMBER NOT NULL ENABLE,            "SID"        NUMBER NOT NULL ENABLE,            "SERIAL#"    NUMBER NOT NULL ENABLE,            "USER#"      NUMBER NOT NULL ENABLE,            "SQL_ID"     VARCHAR2(13 BYTE),            "JOB"        NUMBER,            "ID"         NUMBER,            "PARENT_ID"  NUMBER,            "SEQ_NUMBER" NUMBER,       
     "TIME" TIMESTAMP (6)WITH TIME ZONE NOT NULL ENABLE,        "LOG_LEVEL"    NUMBER(4,0) NOT NULL ENABLE,        "DEPTH"        NUMBER(4,0),        "OPERATION"    VARCHAR2(15 BYTE) NOT NULL ENABLE,        "SUBOPERATION" VARCHAR2(20 BYTE),        "STATUS"       VARCHAR2(10 BYTE) NOT NULL ENABLE,        "NAME"         VARCHAR2(20 BYTE) NOT NULL ENABLE,        "VALUE"        VARCHAR2(4000 BYTE),        "DETAILS" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "DETAILS"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."INST_ID"IS    'Instance ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SID"IS    'Session ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SERIAL#"IS    'Session serial#';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."USER#"IS    'User ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SQL_ID"IS    'Executing SQL statement ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."JOB"IS    'Identifier of job';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."PARENT_ID"IS    'Parent operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."TIME"IS    'Time of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."LOG_LEVEL"IS    'Verbosity level of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DEPTH"IS    'Nesting depth of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."OPERATION"IS    'Current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SUBOPERATION"IS    'Current suboperation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."STATUS"IS    'Status of current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."NAME"IS    'Name of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."VALUE"IS    'Value of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DETAILS"IS    'Extra information for record';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"IS    'Cube operations log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407301C00018$$" ON "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE REJECTED RECORDS CREATE TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"        (            "ID"            NUMBER,            "SEQ_NUMBER"    NUMBER,            "ERROR#"        NUMBER(8,0) NOT NULL ENABLE,            
"ERROR_MESSAGE" VARCHAR2(2000 BYTE),            "RECORD#"       NUMBER(38,0),            "SOURCE_ROW" ROWID,            "REJECTED_RECORD" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "REJECTED_RECORD"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."RECORD#"IS    'Rejected record number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SOURCE_ROW"IS    'Rejected record''s ROWID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."REJECTED_RECORD"IS    'Rejected record copy';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"IS    'Cube rejected records log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407304C00007$$" ON "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    Read the article

  • How to upgrade a remote server from 8.10 to newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (since partially upgraded to 9.04 - see the update below) that I can only access via SSH. If I run apt-get update I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:

    1. Ran apt-get update, which returns errors like:
       Err http://gb.archive.ubuntu.com intrepid/main Packages 404 Not Found
       [and the same for many other packages]

    2. Ran do-release-upgrade, which returns:
       Checking for a new ubuntu release
       Failed Upgrade tool signature
       Failed Upgrade tool
       Done downloading
       extracting 'jaunty.tar.gz'
       Failed to extract
       Extracting the upgrade failed. There may be a problem with the network or with the server.

    3. Edited /etc/update-manager/release-upgrades and changed Prompt=normal to Prompt=lts (as suggested here). Running do-release-upgrade after this returns:
       Checking for a new ubuntu release
       current dist not found in meta-release file
       No new release found

    4. (Updated) I have followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order...

    So basically, it seems I cannot upgrade because my current distro is out of date and no longer supported. Is there a way to upgrade directly to 10.x or 11.x? Note that, as this is a server, I only have command-line access.

    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources. I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server. However, now I cannot upgrade from 9.04 to 9.10. Running do-release-upgrade produces the same error as #2 above, except it says "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:

       A fatal error occurred
       Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
       Traceback (most recent call last):
         File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in sys.exit(main())
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main if app.run():
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run return self.fullUpgrade()
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade if not self.doPostInitialUpdate():
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate self.quirks.run("PostInitialUpdate")
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run for plugin in self.plugin_manager.get_plugins(condition):
         File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins filenames = self.get_plugin_files()
         File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files basenames = [x for x in os.listdir(dirname)
       OSError: [Errno 2] No such file or directory: './plugins'

    It does say to report the bug, but since this is an old unsupported release I don't know if it's worth doing. However, is there a way round this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?
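    For reference, the alternate apt sources mentioned in the update normally follow this pattern (hostname from Ubuntu's EOL Upgrades page; swap the codename for whichever EOL release the server is currently on):

        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse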

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents: // Interfaces interface IParent<TChild> { List<TChild> Children; } interface IChild<TParent> { TParent Parent; } // Classes class Top : IParent<Middle> {} class Middle : IParent<Bottom>, IChild<Top> {} class Bottom : IChild<Middle> {} // Usage var top = new Top(); var middles = top.Children; // List<Middle> foreach (var middle in middles) { var bottoms = middle.Children; // List<Bottom> foreach (var bottom in bottoms) { var middle = bottom.Parent; // Access the parent var top = middle.Parent; // Access the grandparent } } All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it. Data Mapper My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store: class TopMapper { public Top FetchById(int id) { var top = new Top(DataStore.TopDataById(id)); top.Children = MiddleMapper.FetchForTop(Top); return Top; } } class MiddleMapper { public Middle FetchById(int id) { var middle = new Middle(DataStore.MiddleDataById(id)); middle.Parent = TopMapper.FetchForMiddle(middle); middle.Children = BottomMapper.FetchForMiddle(bottom); return middle; } } This way I can have one mapper per data store, and build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem. Requests in the object In this the objects request their Parents and Children themselves: class Middle { private List<Bottom> _children = null; // cache public List<Bottom> Children { get { _children = _children ?? BottomMapper.FetchForMiddle(this); return _children; } set { BottomMapper.UpdateForMiddle(this, value); _children = value; } } } I think this is an example of the repository pattern. Is that correct? This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service and save it back to the database and then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database. Other solutions Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?
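    As a sketch of the "requests in the object" variant with the store dependency pushed behind an interface (written in Java here rather than the C# of the question, and with illustrative names), the object lazy-loads its children through whatever mapper it was given, so a database-backed, web-service-backed, or in-memory test implementation can be swapped in without touching the domain class:

        import java.util.List;

        // Hypothetical interface; a DB mapper, a web-service mapper, or a test fake can implement it.
        interface BottomMapper {
            List<Bottom> fetchForMiddle(Middle parent);
        }

        class Bottom {}

        class Middle {
            private final BottomMapper bottomMapper;   // injected dependency, no concrete store here
            private List<Bottom> children;             // cache, filled on first access

            Middle(BottomMapper bottomMapper) {
                this.bottomMapper = bottomMapper;
            }

            List<Bottom> getChildren() {
                if (children == null) {                // lazy load: only hit the store when asked
                    children = bottomMapper.fetchForMiddle(this);
                }
                return children;
            }
        }

    This keeps the per-access laziness of the second approach while leaving the object ignorant of which store backs it; the trade-off is that something (a factory or the mapper itself) has to decide which implementation to inject.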

    Read the article

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications, all based on the Zend framework and centred on a complex body of business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common centre). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time, when my time is up, anyone else coming in can have no problem extending what I have produced.

    Example structure:

        applicationStorage\ (contains all applications and associated data)
        applicationStorage\Applications\ (contains the applications themselves)
        applicationStorage\Applications\external\ (application grouping folder; contains all external customer access applications)
        applicationStorage\Applications\external\site\ (main external customer access application)
        applicationStorage\Applications\external\site\Modules\
        applicationStorage\Applications\external\site\Config\
        applicationStorage\Applications\external\site\Layouts\
        applicationStorage\Applications\external\site\ZendExtended\ (contains extended Zend classes specific to this application, for example ZendExtended_Controller_Action extends Zend_Controller_Action)
        applicationStorage\Applications\external\mobile\ (mobile external customer access application; different workflow and limited capabilities compared to the full site version)
        applicationStorage\Applications\internal\ (application grouping folder; contains all internal company applications)
        applicationStorage\Applications\internal\site\ (main internal application)
        applicationStorage\Applications\internal\mobile\ (mobile access has a different flow and limited abilities compared to the main site version)
        applicationStorage\Tests\ (contains PHP unit tests)
        applicationStorage\Library\
        applicationStorage\Library\Service\ (contains all business logic, services and the service locator; these are completely decoupled from the Zend framework and rely on the models' interfaces)
        applicationStorage\Library\Zend\ (Zend framework)
        applicationStorage\Library\Models\ (doesn't know about services but is linked to the Zend framework for DB operations; contains model interfaces and model data mappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)

    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)

    The Library contains all "universal" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it privately. So my hope is that in the future I will be able to rewrite the mappers and dbtables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework, if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework).
    Using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. For example, all external applications will have a service_auth_External implementing service_auth_Interface registered; likewise, internal applications register Service_Auth_Internal implementing service_auth_Interface. Both are then retrieved via Service_Locator::getService('Auth'). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would greatly appreciate them. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and the different available services:

        \applicationstorage\Applications\internal\webservice
        \applicationstorage\Applications\external\webservice
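    To make the wiring concrete, here is a minimal, language-neutral sketch of the same idea in Java (class names are placeholders, not the actual Zend code): each application's bootstrap registers its own implementation of a shared interface under the same key, and the rest of the code asks the locator for "Auth" without caring which variant it gets.

        import java.util.HashMap;
        import java.util.Map;

        interface AuthService { boolean authenticate(String user, String password); }

        class ServiceLocator {
            private static final Map<String, Object> services = new HashMap<>();
            static void register(String name, Object service) { services.put(name, service); }
            static Object get(String name) { return services.get(name); }
        }

        class ExternalAuth implements AuthService {
            public boolean authenticate(String user, String password) { return false; /* external rules */ }
        }

        class InternalAuth implements AuthService {
            public boolean authenticate(String user, String password) { return false; /* internal rules */ }
        }

        class ExternalBootstrap {
            static void init() { ServiceLocator.register("Auth", new ExternalAuth()); }
        }

        class InternalBootstrap {
            static void init() { ServiceLocator.register("Auth", new InternalAuth()); }
        }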

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment. As a simple example, the poll() system call takes a timeout argument in units of msec: System Calls poll(2) NAME poll - input/output multiplexing SYNOPSIS int poll(struct pollfd fds[], nfds_t nfds, int timeout); In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec. In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mention CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list: nanosleep pthread_mutex_timedlock pthread_mutex_reltimedlock_np pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np mq_timedreceive mq_reltimedreceive_np mq_timedsend mq_reltimedsend_np sem_timedwait sem_reltimedwait_np poll select pselect _lwp_cond_timedwait _lwp_cond_reltimedwait semtimedop sigtimedwait aiowait aio_waitn aio_suspend port_get port_getn cond_timedwait cond_reltimedwait setitimer (ITIMER_REAL) misc rpc calls, misc ldap calls This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread. Here are some exceptions for which the default resolution is still 10 msec. The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick. The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules. A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. 
They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick. Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
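    As a quick way to observe the effective resolution from user code, a sketch like the following times a large number of 1 msec sleeps; it assumes the JVM's Thread.sleep() bottoms out in cond_timedwait/_lwp_cond_timedwait (as it does on Solaris), which are on the list above, so the average should drop from roughly 10 msec on Solaris 11 to roughly 1 msec on Solaris 11.1.

        // Rough user-level check of timer resolution: average many 1 msec sleeps.
        public class SleepResolution {
            public static void main(String[] args) throws InterruptedException {
                int iterations = 200;
                // warm up the JIT and the timer path
                for (int i = 0; i < 10; i++) Thread.sleep(1);
                long start = System.nanoTime();
                for (int i = 0; i < iterations; i++) {
                    Thread.sleep(1);                       // request a 1 msec timeout
                }
                long avgMicros = (System.nanoTime() - start) / iterations / 1000;
                System.out.println("average sleep(1) took " + avgMicros + " microseconds");
            }
        }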

    Read the article

  • How to delay redo apply on a Data Guard standby database

    - by JaneZhang
    To delay redo apply on the standby, set the "DELAY=" attribute on the corresponding log_archive_dest_n parameter, for example DELAY=360 (the unit is minutes), which delays apply by 360 minutes (6 hours). For example:

        SQL> alter system set log_archive_dest_2='SERVICE=standby LGWR SYNC AFFIRM DELAY=360 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) COMPRESSION=ENABLE DB_UNIQUE_NAME=standby';

    If DELAY is specified without a value, the default delay is 30 minutes.

    Note that if the standby is running real-time apply, the DELAY attribute is ignored and redo is applied as soon as it is received; in that case the standby alert log shows a warning such as:

        WARNING: Managed Standby Recovery started with USING CURRENT LOGFILE
        DELAY 360 minutes specified at primary ignored <<<<<<<<<

    So for the delay to take effect, do not use real-time apply; restart the MRP (managed recovery process) without it, for example:

        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    The following is a simple test of delayed apply:

    1. Check the current setting on the primary:

        SQL> show parameter log_archive_dest_2

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        log_archive_dest_2                   string      SERVICE=STANDBY LGWR SYNC AFFI
                                                         RM VALID_FOR=(ONLINE_LOGFILES,
                                                         PRIMARY_ROLE) DB_UNIQUE_NAME=S
                                                         TANDBY

    2. Set a delay of 5 minutes:

        SQL> alter system set log_archive_dest_2='SERVICE=STANDBY LGWR SYNC AFFIRM delay=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY';

    3. Verify the delay. On the primary:

        SQL> alter system switch logfile;
        System altered.

        SQL> select max(sequence#) from v$archived_log;

        MAX(SEQUENCE#)
        --------------
                    28

    On the standby (alert log):

        Wed Jun 13 19:48:53 2012
        Archived Log entry 14 added for thread 1 sequence 28 ID 0x4c9d8928 dest 1:
        ARCb: Archive log thread 1 sequence 28 available in 5 minute(s)
        Wed Jun 13 19:48:54 2012
        Media Recovery Delayed for 5 minute(s) (thread 1 sequence 28) <<<<<<<< the delay takes effect
        Wed Jun 13 19:53:54 2012
        Media Recovery Log /home/oracle/arch1/standby/1_28_757620395.arc <<<<< applied 5 minutes later
        Media Recovery Waiting for thread 1 sequence 29 (in transit)

    For more details, refer to the documentation:
    http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_apply.htm
    Oracle® Data Guard Concepts and Administration, 11g Release 2 (11.2), Part Number E25608-03

    Read the article

  • About LAG (latency) in GoldenGate

    - by Liu Maclean
    GGSCI provides the LAG command for checking lag. For an Extract, lag is the difference between the time the last record was processed and that record's timestamp in the data source (for Oracle, the online redo logfile); for a Replicat, it is the difference between the time the last record was processed and its timestamp in the trail. In practice, what users usually care about is the end-to-end delay, i.e. how far the Replicat apply on the target lags behind the source. OGG offers the RANGE function to split work among multiple Replicats and speed up apply, and the MAXTRANSOPS parameter to break up very large transactions.

    Lag can be introduced at each of these stages:
    - the Extract reads the redo log and writes the captured data to the trail / sends it to the remote host
    - a data pump Extract reads the extract trail and sends the data over the network to the remote host
    - the collector on the target receives the data and writes it to the local trail
    - the Replicat reads the local trail and applies the data to the target

    Lag can be viewed with the GGSCI INFO and STATUS commands, or with SEND and the LAG command. The differences are:
    - the lag shown by INFO is less accurate than the one returned by SEND
    - INFO derives the lag from the last checkpoint reported to the MANAGER
    - SEND <OBJECT>, lag (or LAG <OBJECT>) asks the <OBJECT> process directly for its latest lag
    - LAG is reported as a time difference, not as an amount of outstanding data in kilobytes

    Also note how lag behaves while a process is catching up. If an EXTRACT/PUMP/REPLICAT has been stopped for a while, or is still mining old redo logs rather than the current one, the reported lag will be large until the process catches up with the current position, after which it drops back towards real time. For example, stop and restart an Extract and watch the lag shrink as it catches up with the redo log:

        GGSCI (XIANGBLI-CN) 27> stop load2
        Sending STOP request to EXTRACT LOAD2 ...
        Request processed.

        GGSCI (XIANGBLI-CN) 28> start load2
        Sending START request to MANAGER ...
        EXTRACT LOAD2 starting

        GGSCI (XIANGBLI-CN) 31> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:04:34 (updated 00:00:08 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:21:32  Seqno 44, RBA 13750272
                             SCN 0.1845479 (1845479)

        GGSCI (XIANGBLI-CN) 35> lag load2
        Sending GETLAG request to EXTRACT LOAD2 ...
        Last record lag: 130 seconds.
        At EOF, no more records to process.

        GGSCI (XIANGBLI-CN) 36> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:27:33  Seqno 44, RBA 13817856
                             SCN 0.1845671 (1845671)

    The Last record lag and Checkpoint Lag above show the EXTRACT/PUMP/REPLICAT catching up after the restart; when there are many GB of redo to work through, this phase can take a while and the reported lag stays high until it completes.

    Finally, because INFO and LAG rely on checkpoints, Long Running Transactions (LRTs) that have not yet committed can inflate the apparent lag, since the processes only move data and checkpoints forward once a transaction commits. On the Replicat side, the MAXTRANSOPS parameter can be used to split up very large transactions and reduce LAG.

    Read the article

  • Installing an exe with Powershell DSC Package resource gets return code 1619

    - by Jay Spang
    I'm trying to use Powershell DSC's Package resource to install an exe... Perforce's P4V to be specific. Here's my code: Configuration PerforceMachine { Node "SERVERNAME" { Package P4V { Ensure = "Present" Name = "Perforce Visual Components" Path = "\\nas\share\p4vinst64.exe" ProductId = '' Arguments = "/S /V/qn" # args for silent mode LogPath = "$env:ProgramData\p4v_install.log" } } } When running this, this is the error Powershell gives me: PowerShell provider MSFT_PackageResource failed to execute Set-TargetResource functionality with error message: The return code 1619 was not expected. Configuration is likely not correct + CategoryInfo : InvalidOperation: (:) [], CimException + FullyQualifiedErrorId : ProviderOperationExecutionFailure + PSComputerName : SERVERNAME According to documentation, return code 1619 means the MSI package couldn't be opened. However, when I manually log in to the machine and run "\\nas\share\p4vinst64.exe /S /V/qn", the install works flawlessly. Does anyone know why this is failing? Alternately, can anyone tell me how to troubleshoot this? I pasted all the error information I got from the terminal, my log file (p4v_install.log) is a 0 byte file, and there are no events in the event viewer. I don't know how to troubleshoot it any further! EDIT: I should note that I also tried using the File resource to copy the file locally, and then install it from there. Sadly, that met with the same result.

    Read the article

  • Applying Test Driven Development to a tightly coupled architecture

    - by Chris D
    Hi all, I've recently been studying TDD, attended a conference and have dabbled in a few tests, and already I'm 100% sold - I absolutely love TDD. As a result I've raised this with my seniors and they are prepared to give it a chance, so they have tasked me with coming up with a way to implement TDD in the development of our enterprise product.

    The problem is our system has evolved since the days of VB6 to .NET and implements a lot of legacy technology and some far-from-best-practice development techniques, i.e. a lot of business logic in the ASP.NET code-behind and client script. The largest problem, however, is how our classes are tightly coupled with database access; properties, methods, constructors - they usually have some database access in some form or another. We use an in-house data access code generator tool that creates sqlDataAdapters that give us all the database access we could ever want, which helps us develop extremely quickly. However, classes in our business layer are very tightly coupled to this data layer - we aren't even close to implementing some form of repository design. This and the issues above have created all sorts of problems for me.

    I have tried to develop some unit tests for some existing classes I've already written, but the tests take a LOT longer to run since db access is required; not to mention that, since we use the MS Enterprise Caching framework, I am forced to fake an HttpContext for my tests to run successfully, which isn't practical. Also, I can't see how to use TDD to drive the design of any new classes I write, since they have to be so tightly coupled to the database ... help! Because of the architecture of the system it appears I can't implement TDD without some real hack, which in my eyes just defeats the aim of TDD and the huge benefits that come with it.

    Does anyone have any suggestions for how I could implement TDD with the constraints I'm bound to? Or do I need to push the repository design pattern down my seniors' throats and tell them we either change our architecture/development methodology or forget about TDD altogether? :) Thanks
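    For what it's worth, the usual first step is to carve a seam: hide the generated sqlDataAdapter access behind a small interface and hand it to the business class, so tests can substitute an in-memory fake. A minimal sketch (in Java for neutrality; all names are made up, not from the actual product):

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical repository seam; the real implementation would wrap the generated data access code.
        interface OrderRepository {
            List<String> findOrdersFor(String customerId);
        }

        class OrderService {
            private final OrderRepository repository;
            OrderService(OrderRepository repository) { this.repository = repository; }

            int countOrdersFor(String customerId) {
                return repository.findOrdersFor(customerId).size();   // business logic, no DB access here
            }
        }

        class FakeOrderRepository implements OrderRepository {
            public List<String> findOrdersFor(String customerId) {
                List<String> orders = new ArrayList<>();
                orders.add("order-1");
                orders.add("order-2");
                return orders;
            }
        }

        class OrderServiceTest {
            public static void main(String[] args) {
                OrderService service = new OrderService(new FakeOrderRepository());
                assert service.countOrdersFor("any-customer") == 2;   // fast test: no database, no HttpContext
                System.out.println("countOrdersFor test passed");
            }
        }

    New code written this way can be test-driven immediately, while legacy classes can be migrated behind such interfaces one at a time rather than all at once.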

    Read the article

  • WPF: How to set column width with auto fill in ListView with custom user control.

    - by powerk
    A ListView with Datatemplate in GridViewColumn: <ListView Name ="LogDataList" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding LogDataCollection}" Background="Cyan"> <ListView.View> <GridView AllowsColumnReorder="true" ColumnHeaderToolTip="Event Log Information"> <GridViewColumn Header="Event Log Name" Width="100"> <GridViewColumn.CellTemplate> <DataTemplate> <l:MyTextBlock Height="25" DataContext="{Binding LogName, Converter={StaticResource DataFieldConverter}}" HighlightMatchCase="{Binding Element}" Loaded="EditBox_Loaded"/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> ... </GridView> </ListView.View> </ListView> I have no idea about how to make column width autofill although I have tried a lot of way to walk up. The general idea for demo is : <ListView Name ="LogDataList" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding LogDataCollection}" Background="Cyan"> <ListView.Resources> <Style x:Key="ColumnWidthStyle" TargetType="{x:Null GridViewColumn}"> <Style.Setters> <Setter Property="HorizontalContentAlignment" Value="Stretch" > </Setter> </Style.Setters> </Style> </ListView.Resources> <ListView.View> <GridView AllowsColumnReorder="true" ColumnHeaderToolTip="Event Log Information"> <GridViewColumn Header="Event Log Name" DisplayMemberBinding="{Binding Path=LogName}" HeaderContainerStyle="{StaticResource ColumnWidthStyle}"> It works, but not accord with my demand. I need to customize datatemplate with my custom user control(MyTextBlock) since the enhancement(HighlighMatchCase property) and binding datacontext. How can I set up ColumnWidthMode with Fill in the word? On-line'in. I really appreciate your help.

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?
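    One way to read the "service layer that enforces the write-only policy" option is to keep the driver handle private inside a thin wrapper that only exposes insert and read operations, so application code never sees update or remove. A minimal Java sketch (using the current mongodb-driver-sync API, which postdates this question; database and collection names are placeholders):

        import com.mongodb.client.FindIterable;
        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import org.bson.Document;

        // Append-only facade: the collection handle never escapes, so callers can only insert or read.
        class AppendOnlyLog {
            private final MongoCollection<Document> collection;

            AppendOnlyLog(MongoClient client, String db, String coll) {
                this.collection = client.getDatabase(db).getCollection(coll);
            }

            void append(Document record) {
                collection.insertOne(record);      // the only write operation exposed
            }

            FindIterable<Document> query(Document filter) {
                return collection.find(filter);    // reads allowed; mutation methods are never surfaced
            }
        }

        class Example {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    AppendOnlyLog log = new AppendOnlyLog(client, "logs", "app");
                    log.append(new Document("level", "INFO").append("msg", "started"));
                }
            }
        }

    This does not stop malicious code that holds its own connection string, so it is a guard against poorly-written code rather than a true security boundary; combining it with a dedicated low-privilege database user is the belt-and-braces version.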

    Read the article

  • XAMPP Redirecting to XAMPP directory on Internal Network

    - by user302252
    I installed XAMPP on my desktop. I set up vhosts for about 5 sites and they are all working properly from the desktop itself. The problem arises whenever I try to access these vhosts from my laptop. I changed the hosts file on the laptop to redirect the laptop dev.domain.com requests to the desktop, however, when I try to access these sites from my laptop on the local network, I only receive the XAMPP welcome screen. It seems like when trying to access the vhosts on the desktop from the laptop the vhosts file is ignored as all requests are redirected to the xampp directory. What might I need to adjust to ensure access to the vhosts on the desktop from the laptop?
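    For reference, the behaviour described (every request from another machine landing on the XAMPP welcome page) is typically what Apache does when the incoming Host header does not match any ServerName, so the request falls through to the default/first vhost. A sketch of the relevant pieces, with placeholder paths and addresses: the desktop's httpd-vhosts.conf needs a name-based vhost whose ServerName matches the hostname the laptop requests, and the laptop's hosts file must point that hostname at the desktop's LAN IP.

        # httpd-vhosts.conf on the desktop (Apache 2.2-style XAMPP config; paths are placeholders)
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName dev.domain.com
            DocumentRoot "C:/xampp/htdocs/dev"
            <Directory "C:/xampp/htdocs/dev">
                Order Allow,Deny
                Allow from all
            </Directory>
        </VirtualHost>

        # hosts file on the laptop (the desktop's LAN IP is a placeholder)
        192.168.1.10    dev.domain.com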

    Read the article

  • Using npgsql with SSIS

    - by Flash
    Hi, I am using npgsql as the ADO.NET provider in SSIS to connect to Postgres and create some workflows. I am able to connect to postgres but am unable to use the "Table or view" data access mode to list the tables and views. I have to resort to using "SQL command" data access mode. For the "table or view" access mode, the log shows that the last call is GetDataSourceInformation and it returns successfully. After that there is a close connection. Has anyone successfully used "Table or view" data access mode using npgsql? Thanks, Flash

    Read the article

  • mgtwitterengine and oauth 401 error: Boggled

    - by Jason
    OK... so here is my code: twitterEngine = [[MGTwitterEngine alloc] initWithDelegate:self]; [twitterEngine setConsumerKey:CONSUMER_KEY secret:CONSUMER_SECRET]; accessToken = [twitterEngine getXAuthAccessTokenForUsername:profile.twitterUserId password:profile.twitterPassword]; NSLog(@"Access token: %@", accessToken); the console shows the access token returned just fine (so it seems to work) eg. Access token: C8A24515-0F11-4B5A-8813-XXXXXXXXXXXXXX but instead of accessTokenReceived method being called next on my delegate, it calls requestFailed with a 401. How can I be getting a 401 unauthorized and getting an access token back from the method call?????

    Read the article

  • SSRS 2008 Report Manager Error

    - by Nick
    I have just installed SQL Server 2008 including Reporting Services on Windows Server 2003. I'm having a problem though accessing the Report Manager. When the Reporting Service is first started I can access it fine but after maybe an hour when I try and access it I get an error saying: Unable to connect to the remote server. The reporting service is still running at this point. I can connect to it through Reporting Services Configuration Manager and clicking on the Web Service URL gives a directory listing (I assume that is correct behaviour). If I stop and start the service through Reporting Services Configuration Manager then I can access Report Manager once again (although in maybe an hour I will get the same error once again). I've installed the latest SP1 service pack. I'm using the same domain account to run all the SQL services. The report server is set to use the default ReportServer virtual directory, is set to IP address All Assigned, TCP Port 80 and no SSL certificate. The report manager is set to use the default Reports virtual directory, IP address All Assigned, TCP Port 80 and no SSL certificates. In the log file I get an error: Unable to connect to remote server HTTP status code 500 An attempt was made to access a socket in a way forbidden by its access permissions. Does anyone have any idea why this is happening? I've searched the net but haven't been able to find a solution.

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
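    For reference, a minimal JNDI client that performs the simple bind over ldaps:// described above would look roughly like the sketch below. The host, port, base DN and admin DN are taken from the configuration shown in the question; the class name, the password and the rest are illustrative assumptions about what LDAPConnector does, not the actual code.

        // Minimal sketch, assuming the server certificate has already been
        // imported into the JRE's cacerts as described above.
        // Class name and password below are placeholders, not from the question.
        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.directory.DirContext;
        import javax.naming.directory.InitialDirContext;

        public class SecureLdapClient {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldaps://ldap.natraj.com:636");
                env.put(Context.SECURITY_PROTOCOL, "ssl");
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=localdomain");
                env.put(Context.SECURITY_CREDENTIALS, "secret"); // placeholder password
                DirContext ctx = new InitialDirContext(env);     // simple bind happens here
                System.out.println("Bound as: " + ctx.getNameInNamespace());
                ctx.close();
            }
        }

    If a sketch like this fails under one JRE but works under another, the difference usually lies in the TLS handshake (cipher suites and certificates) rather than in the JNDI code itself.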

    Read the article

  • Weblogic is slow to start (11mins) under VM (VirtualBox and VMware)

    - by Vladimir Dyuzhev
    (SOLVED! BY FAKING SYSTEM RANDOM GENERATOR, SEE BELOW) I'm setting up a VM image for my dev/build team. Inside that VM a Weblogic domain should be running. I use Ububtu server distro, WLS 9.2MP3 + ALSB. Everything works OK, quite fast, but at the start time the WLS stops twice for a measurable amounts of time. Two stops in total amount to about 10 minutes delay. For tasks where deployment requires server restart it's very annoying. :-( Sleeping time is not constant, sometimes the server starts very fast, sometimes so-so, sometimes 10 minutes or more. Interesting that if I press Enter while looking at the stopped server, it wakes up much faster, sometimes after a few seconds. WLST (Weblogic Jython shell) is also hanging for quite a time when executed in VM. It doesn't react to Enter though. Here must be some developers who run WLS with a VM. I wonder if others have the same problem? Was someone able to solve it? Here's the server output (just for a case): Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04) Java HotSpot(TM) Client VM (build 1.5.0_12-b04, mixed mode) Starting WLS with line: /shared2/beahome/jdk150_12/bin/java -client -Xmx256m -XX:MaxPermSize=128m -Xverify:none -da -Dplatform.home=/shared2/beahome/weblogic92 -Dwls.home=/shared2/beahome/weblogic92/server -Dwli.home=/shared2/beahome/weblogic92/integration -Dweblogic.management.discover=true -Dwl w.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=/shared2/beahome/patch_weblogic923/profiles/default/sysext_ manifest_classpath -Dweblogic.management.username=admin -Dweblogic.management.password=wlsadmin -Dweblogic.Name=LOGMGR-admin -Djava.security .policy=/shared2/beahome/weblogic92/server/lib/weblogic.policy weblogic.Server <1-Apr-2010 12:47:22 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath: /shared2/beahome/weblogic92/platform/lib/p13n/p13n-schemas.jar:/shared2/beahome/weblogic92/platform/lib/p13n/p13n_common.jar:/shared2/beahom e/weblogic92/platform/lib/p13n/p13n_system.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_common.jar:/shared2/beahome/weblogic92/pl atform/lib/wlp/netuix_schemas.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_system.jar:/shared2/beahome/weblogic92/platform/lib/wl p/wsrp-common.jar> <1-Apr-2010 12:47:22 o'clock PM GMT-05:00> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) Client VM Ve rsion 1.5.0_12-b04 from Sun Microsystems Inc.> <1-Apr-2010 12:47:23 o'clock PM GMT-05:00> <Info> <Management> <BEA-141107> <Version: WebLogic Server 9.2 MP3 Mon Mar 10 08:28:41 EDT 2008 1096261 > <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Info> <WebLogicServer> <BEA-000215> <Loaded License : /shared2/beahome/license.bea> <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING> <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool> <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Notice> <Log Management> <BEA-170019> <The server log file /shared2/wldomains/beaadmd/LOGMGR/ser vers/LOGMGR-admin/logs/LOGMGR-admin.log is opened. All server side log events will be written to this file.> Here we have the first delay, up to 5 mins... 
<1-Apr-2010 12:53:21 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.> <1-Apr-2010 12:53:24 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY> <1-Apr-2010 12:53:24 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING> <1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <Log Management> <BEA-170027> <The server initialized the domain log broadcaster success fully. Log messages will now be broadcasted to the domain log.> <1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN> <1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING> <1-Apr-2010 12:53:28 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under t he alias adminuialias from the jks keystore file /shared2/wldomains/beaadmd/LOGMGR/CustomIdentity.jks.> And here is the second, again up to 5 mins. <1-Apr-2010 12:58:56 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /shared 2/wldomains/beaadmd/LOGMGR/CustomTrust.jks.> <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on 192.168.56.102:7002 f or protocols iiops, t3s, ldaps, https.> <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 192.168.56.102:8012 for pro tocols iiop, t3, ldap, http.> <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000331> <Started WebLogic Admin Server "LOGMGR-admin" for domain " LOGMGR" running in Development Mode> <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING> <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode> UPDATE I think I've got the track: it must be the randon seed initialization. That may explain why generating keyboard events release the server. I've made the thread dump, and one thread is in runnable state, but waiting: "[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=1 tid=0x0a7b06e8 nid=0xeda runnable [0x728a500 0..0x728a6d80] at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(FileInputStream.java:194) at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185) at sun.security.provider.NativePRNG$RandomIO.implGenerateSeed(NativePRNG.java:202) - locked <0x7d928c78> (a java.lang.Object) at sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:108) at sun.security.provider.NativePRNG.engineGenerateSeed(NativePRNG.java:102) at java.security.SecureRandom.generateSeed(SecureRandom.java:475) at weblogic.security.AbstractRandomData.ensureInittedAndSeeded(AbstractRandomData.java:83) SOLVED Weblogic uses SecureRandom to init security subsystem. SecureRandom by default uses /dev/urandom file. For some reason, reading this file under VM comes to halt quite often. Generating console events helps to create more randomness, and release the WLS. For the test purposes I have changed jre/lib/security/java.security file, property to securerandom.source=file:/tmp/big.random.file. Weblogic now starts in 15 seconds.
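    For readers hitting the same hang: the fix described above amounts to pointing SecureRandom's seed source at the non-blocking urandom device. Two commonly used ways to apply it are sketched below; the exact start-script variable is an assumption about this particular domain's scripts, and the extra "/./" is the usual trick to avoid the JDK's special handling of the literal value file:/dev/urandom.

        # Per-domain: append to the JVM options used to start the server
        # (for example JAVA_OPTIONS in setDomainEnv.sh / startWebLogic.sh)
        -Djava.security.egd=file:/dev/./urandom

        # Or JRE-wide: edit jre/lib/security/java.security
        securerandom.source=file:/dev/./urandom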

    Read the article

  • Remote EJB lookup issue with WebSphere 6.1

    - by marc dauncey
    I've seen this question asked before, but I've tried various solutions proposed, to no avail. Essentially, I have two EJB enterprise applications, that need to communicate with one another. The first is a web application, the second is a search server - they are located on different development servers, not in the same node, cell, or JVM, although they are on the same physical box. I'm doing the JNDI lookup via IIOP, and the URL I am using is as follows: iiop://searchserver:2819 In my hosts file, I've set searchserver to 127.0.0.1. The ports for my search server are bound to this hostname too. However, when the web app (that uses Spring btw) attempts to lookup the search EJB, it fails with the following error. This is driving me nuts, surely this kind of comms between the servers should be fairly simple to get working. I've checked the ports and they are correct. I note that the exception says the initial context is H00723Node03Cell/nodes/H00723Node03/servers/server1, name: ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome. This is the web apps server NOT the search server. Is this correct? How can I get Spring to use the right context? [08/06/10 17:14:28:655 BST] 00000028 SystemErr R org.springframework.remoting.RemoteLookupFailureException: Failed to locate remote EJB [ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome]; nested exception is javax.naming.NameNotFoundException: Context: H00723Node03Cell/nodes/H00723Node03/servers/server1, name: ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome: First component in name hmvsearch/HMVSearchHome not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0] at org.springframework.ejb.access.SimpleRemoteSlsbInvokerInterceptor.doInvoke(SimpleRemoteSlsbInvokerInterceptor.java:101) at org.springframework.ejb.access.AbstractRemoteSlsbInvokerInterceptor.invoke(AbstractRemoteSlsbInvokerInterceptor.java:140) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy7.doSearchByProductKeywordsForKiosk(Unknown Source) at com.hmv.web.usecases.search.SearchUC.execute(SearchUC.java:128) at com.hmv.web.actions.search.SearchAction.executeAction(SearchAction.java:129) at com.hmv.web.actions.search.KioskSearchAction.executeAction(KioskSearchAction.java:37) at com.hmv.web.actions.HMVAbstractAction.execute(HMVAbstractAction.java:123) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at com.hmv.web.controller.HMVActionServlet.process(HMVActionServlet.java:149) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:743) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1282) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1239) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:136) at com.hmv.web.support.SessionFilter.doFilter(SessionFilter.java:137) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:142) at 
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:121) at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:82) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:670) at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:2933) at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:221) at com.ibm.ws.webcontainer.VirtualHost.handleRequest(VirtualHost.java:210) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1912) at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:84) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:472) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:411) at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:101) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.requestComplete(WorkQueueManager.java:566) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.attemptIO(WorkQueueManager.java:619) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.workerRun(WorkQueueManager.java:952) at com.ibm.ws.tcp.channel.impl.WorkQueueManager$Worker.run(WorkQueueManager.java:1039) at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1462) Caused by: javax.naming.NameNotFoundException: Context: H00723Node03Cell/nodes/H00723Node03/servers/server1, name: ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome: First component in name hmvsearch/HMVSearchHome not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0] at com.ibm.ws.naming.jndicos.CNContextImpl.processNotFoundException(CNContextImpl.java:4392) at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1752) at com.ibm.ws.naming.jndicos.CNContextImpl.doLookup(CNContextImpl.java:1707) at com.ibm.ws.naming.jndicos.CNContextImpl.lookupExt(CNContextImpl.java:1412) at com.ibm.ws.naming.jndicos.CNContextImpl.lookup(CNContextImpl.java:1290) at com.ibm.ws.naming.util.WsnInitCtx.lookup(WsnInitCtx.java:145) at javax.naming.InitialContext.lookup(InitialContext.java:361) at org.springframework.jndi.JndiTemplate$1.doInContext(JndiTemplate.java:132) at org.springframework.jndi.JndiTemplate.execute(JndiTemplate.java:88) at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:130) at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:155) at org.springframework.jndi.JndiLocatorSupport.lookup(JndiLocatorSupport.java:95) at org.springframework.jndi.JndiObjectLocator.lookup(JndiObjectLocator.java:105) at org.springframework.ejb.access.AbstractRemoteSlsbInvokerInterceptor.lookup(AbstractRemoteSlsbInvokerInterceptor.java:98) at org.springframework.ejb.access.AbstractSlsbInvokerInterceptor.getHome(AbstractSlsbInvokerInterceptor.java:143) at org.springframework.ejb.access.AbstractSlsbInvokerInterceptor.create(AbstractSlsbInvokerInterceptor.java:172) at org.springframework.ejb.access.AbstractRemoteSlsbInvokerInterceptor.newSessionBeanInstance(AbstractRemoteSlsbInvokerInterceptor.java:226) at org.springframework.ejb.access.SimpleRemoteSlsbInvokerInterceptor.getSessionBeanInstance(SimpleRemoteSlsbInvokerInterceptor.java:141) at org.springframework.ejb.access.SimpleRemoteSlsbInvokerInterceptor.doInvoke(SimpleRemoteSlsbInvokerInterceptor.java:97) ... 36 more Many thanks for any assistance! Marc
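    As a point of comparison, a plain JNDI lookup that targets the remote search server's naming service directly, rather than whatever default initial context the web application's own server provides, would look roughly like the sketch below. The JNDI name, host and port are the ones quoted in the question; the context factory is WebSphere's standard WsnInitialContextFactory, and the rest (the class name, whether 2819 really is the remote bootstrap port, and a client classpath containing the WebSphere client libraries) are assumptions.

        // Illustrative sketch only: resolves the home against the remote
        // server's name space instead of the local server's root context.
        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class RemoteSearchLookup {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "com.ibm.websphere.naming.WsnInitialContextFactory");
                // Host and port as given in the question; corbaloc:iiop:host:port
                // is the usual form for a remote WebSphere bootstrap address.
                env.put(Context.PROVIDER_URL, "corbaloc:iiop:searchserver:2819");
                Context ctx = new InitialContext(env);
                Object raw = ctx.lookup("ejb/com/hmv/dataaccess/ejb/hmvsearch/HMVSearchHome");
                // The raw stub would normally be narrowed to the home interface via
                // javax.rmi.PortableRemoteObject.narrow(raw, HMVSearchHome.class).
                System.out.println("Looked up: " + raw);
            }
        }

    If the Spring configuration is kept, the usual approach is to supply these same properties through the proxy bean's jndiEnvironment property instead of relying on the local server's default context.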

    Read the article

  • sharepoint search is not working

    - by Nikkho
    Hi all, I have an issue with SharePoint search.

    The situation: SharePoint is installed on a farm with 2 servers. A new app pool was created, and that app pool runs under a domain account called moss_service. moss_service is in the administrators group on both servers and is also db_creator on the content database. When I checked initially, the search's default content access account was a different account, so I changed it to the moss_service account. I didn't do an IIS reset because this is a production server and they don't want frequent IIS resets. Strangely, checking services.msc, the "Office SharePoint Server Search" service is still running under the old account (and apparently it's only running on 1 server; the other server is not running it). I then changed that to domain\moss_service with the password and reran the crawl.

    How I diagnose the issue: basically, every time I change something I restart the crawl and then check the event viewer. Multiple things come out, but these are the major ones:

    - The start address cannot be crawled. The password for the content access account cannot be decrypted because it was stored with different credentials. Re-type the password for the account used to crawl this content. (0x80042406)
    - Performance monitoring cannot be initialized for the gatherer object, because the counters are not loaded or the shared memory object cannot be opened. This only affects availability of the perfmon counters. Restart the computer.
    - Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (0x80041205)

    Crawl log result: the crawl log is showing this: The password for the content access account cannot be decrypted because it was stored with different credentials. Re-type the password for the account used to crawl this content. I tried changing it again in services.msc and then reran the full crawl, but it still doesn't work. I have tried entering the account as [email protected] and as domain\moss_service.

    My questions are: How do I fix this? Is this the right way to set up the search? Does the search account have to be a different domain account? It seems like one fix complicates the other; how do I set this right? Is it worth upgrading to SP2?

    Read the article

  • REST api design to retrieve summary information

    - by Jacques René Mesrine
    I have a scenario in which I have a REST API that manages a resource we will call Group. A Group is similar in concept to a discussion forum in Google Groups. Now I have two GET access methods which I believe need separate representations.

    The 1st GET access method retrieves the minimal amount of information about a Group. Given a *group_id* it should return a minimal amount of information like:

        {
            group_id: "5t7yu8i9io0op",
            group_name: "Android Developers",
            is_moderated: true,
            number_of_users: 34,
            new_messages: 5,
            icon: "http://boo.com/pic.png"
        }

    The 2nd GET access method retrieves summary information which is more statistical in nature, like:

        {
            group_id: "5t7yu8i9io0op",
            top_ranking_users: { [
                { user: "george", posts: 789, rank: 1 },
                { user: "joel", posts: 560, rank: 2 }
            ...] },
            popular_topics: { [ ... ] }
        }

    I want to separate these data access methods, and I'm currently planning on this design:

        GET /group/:group_id/
        GET /group/:group_id/stat

    Only the latter will return the statistical information about the group. What do you think about this?
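    To make the split concrete, here is one way the two representations could be exposed on the server side. The question itself is framework-agnostic, so this JAX-RS sketch is purely illustrative: the class name, the placeholder values and the assumption that a JSON provider (for example Jackson) is registered are all assumptions, not part of the question.

        // Illustrative JAX-RS resource: one path for the minimal representation,
        // a /stat sub-path for the statistical one. Values are placeholders.
        import java.util.Collections;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        @Path("/group/{groupId}")
        public class GroupResource {

            // Minimal representation: GET /group/{groupId}
            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public Map<String, Object> getGroup(@PathParam("groupId") String groupId) {
                Map<String, Object> summary = new LinkedHashMap<String, Object>();
                summary.put("group_id", groupId);
                summary.put("group_name", "Android Developers"); // placeholder value
                summary.put("is_moderated", Boolean.TRUE);
                summary.put("number_of_users", Integer.valueOf(34));
                summary.put("new_messages", Integer.valueOf(5));
                return summary;
            }

            // Statistical representation: GET /group/{groupId}/stat
            @GET
            @Path("/stat")
            @Produces(MediaType.APPLICATION_JSON)
            public Map<String, Object> getStats(@PathParam("groupId") String groupId) {
                Map<String, Object> stats = new LinkedHashMap<String, Object>();
                stats.put("group_id", groupId);
                stats.put("top_ranking_users", Collections.emptyList()); // placeholder
                stats.put("popular_topics", Collections.emptyList());    // placeholder
                return stats;
            }
        }

    The design itself (a lightweight default representation at /group/:group_id and a heavier statistical one at /group/:group_id/stat) keeps the common lookup cheap and lets clients opt in to the expensive aggregation only when they need it.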

    Read the article

  • Python: some newbie questions on sys.stderr and using function as argument

    - by Cawas
    I'm just starting on Python and maybe I'm worrying too much too soon, but anyways...

        log = "/tmp/trefnoc.log"

        def logThis (text, display=""):
            msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text
            if display != None:
                print msg + display
            logfile = open(log, "a")
            logfile.write(msg + "\n")
            logfile.close()
            return msg

        def logThisAndExit (text, display=""):
            msg = logThis(text, display=None)
            sys.exit(msg + display)

    That is working, but I don't like how it looks. Is there a better way to write this (maybe with just 1 function), and is there anything else I should be concerned about when exiting?

    Now to some background... Sometimes I will call logThis just to log and display. Other times I want to call it and exit. Initially I was doing this:

        logThis ("ERROR. EXITING")
        sys.exit()

    Then I figured that wouldn't properly set the stderr, thus the current code shown at the top. My first idea was actually passing "sys.exit" as an argument, calling it as

        logThis ("ERROR. EXITING", call=sys.exit)

    with logThis defined as follows (showing just the relevant changed part):

        def logThis (text, display="", call=print):
            msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text
            call msg + display

    But that obviously didn't work. I think Python doesn't store functions inside variables. I couldn't (quickly) find anywhere if Python can have variables taking functions or not! Maybe using an eval function? I really always try to avoid them, though. Sure, I thought of using if instead of another def, but that wouldn't be any better or worse. Anyway, any thoughts?

    Read the article

  • Memcached broken pipe in rails

    - by abronte
    I seem to run into some random broken pipe errors if I try to access memcached in a specific way. When I try to access memcached via Rails.cache, the first one or two reads/writes will result in a "MemCacheError (Broken pipe): Broken pipe" and return nil. But if I access memcached through a new object (ActiveSupport::Cache::MemCacheStore.new) I don't get the broken pipe error at all. I've tried upgrading to memcached 1.4.4 (I was previously using 1.2.2) and I still get the error. I'm using Rails 2.3.5. I would prefer to access memcached through Rails.cache because it takes namespaces into account. Any ideas?

    Read the article

  • Sending data through POST request from a node.js server to a node.js server

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following:

        var options = {
            host: 'my.url',
            port: 80,
            path: '/login',
            method: 'POST'
        };

        var req = http.request(options, function(res){
            console.log('status: ' + res.statusCode);
            console.log('headers: ' + JSON.stringify(res.headers));
            res.setEncoding('utf8');
            res.on('data', function(chunk){
                console.log("body: " + chunk);
            });
        });

        req.on('error', function(e) {
            console.log('problem with request: ' + e.message);
        });

        // write data to request body
        req.write('data\n');
        req.write('data\n');
        req.end();

    This chunk is taken more or less from the node.js website so it should be correct. The only thing I don't see is how to include username and password in the options variable to actually login. This is how I deal with the data in the server node.js (I use express):

        app.post('/login', function(req, res){
            var user = {};
            user.username = req.body.username;
            user.password = req.body.password;
            ...
        });

    How can I add those username and password fields to the options variable to have it logged in? Thanks

    Read the article

< Previous Page | 419 420 421 422 423 424 425 426 427 428 429 430  | Next Page >