Search Results

Search found 51164 results on 2047 pages for 'oracle access manager'.


  • Ibator didn't generate Oracle varchar2 field

    - by bugbug
    I have table APP_REQ_APPROVE_COMPARE with the following columns:

        "ID" NUMBER NOT NULL ENABLE,
        "TRACK_NO" VARCHAR2(20 BYTE) NOT NULL ENABLE,
        "REQ_DATE" DATE NOT NULL ENABLE,
        "OFFCODE" CHAR(6 BYTE) NOT NULL ENABLE,
        "COMPARE_CASE_ID" NUMBER NOT NULL ENABLE,
        "VEHICLE_NAME" VARCHAR2(100 BYTE),
        "ENGINE_NO" VARCHAR2(100 BYTE),
        "BODY_NO" VARCHAR2(100 BYTE),
        "HOLD_SHIP" NUMBER,
        "OWNERSHIP" VARCHAR2(200 BYTE),
        "RENT_NAME" VARCHAR2(200 BYTE),
        "CONTRACT" VARCHAR2(100 BYTE),
        "CONTRACT_NO" VARCHAR2(100 BYTE),
        "CONTRACT_DATE" DATE,
        "ISLAWBREAKERRENT" CHAR(1 BYTE) NOT NULL ENABLE,
        "MISTAKE_DETAIL" VARCHAR2(4000 BYTE),
        "COMPARE_REASON" VARCHAR2(4000 BYTE),
        "CREATE_BY" NUMBER NOT NULL ENABLE,
        "CREATE_ON" DATE DEFAULT SYSDATE NOT NULL ENABLE,
        "UPDATE_BY" NUMBER,
        "UPDATE_ON" DATE

    When I generate a Java bean using Ibator, trackNo, vehicleName, and the rest of the fields defined as VARCHAR2 are missing. What is the problem in my case? Here is my Ibator configuration file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE ibatorConfiguration PUBLIC "-//Apache Software Foundation//DTD Apache iBATIS Ibator Configuration 1.0//EN" "http://ibatis.apache.org/dtd/ibator-config_1_0.dtd">
        <ibatorConfiguration>
          <classPathEntry location="/dos/connector/oracle_jdbc.jar"/>
          <ibatorContext id="autoPerson" defaultModelType="flat" targetRuntime="Ibatis2Java2">
            <jdbcConnection connectionURL="jdbc:oracle:thin:@192.168.42.144:1521:orcl" driverClass="oracle.jdbc.driver.OracleDriver" userId="user" password="password"/>
            <javaModelGenerator targetPackage="com.ko.model" targetProject="FormConfig">
              <property name="enableSubPackages" value="true"/>
              <property name="trimStrings" value="true"/>
            </javaModelGenerator>
            <sqlMapGenerator targetPackage="com.ko.map" targetProject="FormConfig">
              <property name="enableSubPackages" value="true"/>
            </sqlMapGenerator>
            <daoGenerator targetPackage="com.ko.model.dao" type="SPRING" targetProject="FormConfig" implementationPackage="com.ko.model.dao.impl">
              <property name="enableSubPackges" value="true"/>
              <property name="methodNameCalculator" value="extended"/>
            </daoGenerator>
            <table tableName="APP_REQ_APPROVE_COMPARE" domainObjectName="AppReqApproveCompare"/>
          </ibatorContext>
        </ibatorConfiguration>
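
    Editor's note, a hedged debugging aid rather than part of the question: Ibator introspects tables through JDBC DatabaseMetaData, so a quick probe of what the driver reports for this table usually tells you whether the missing columns are a driver/metadata problem or a configuration problem. The sketch below reuses the connection settings from the configuration above and passes a null schema pattern so the table is matched in whichever schema it lives.

        // Probe what the Oracle JDBC driver reports for the table.
        // Ibator builds its model from DatabaseMetaData.getColumns(), so any
        // column missing here points at the driver/metadata rather than the
        // Ibator configuration.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class ColumnProbe {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@192.168.42.144:1521:orcl", "user", "password");
                try {
                    ResultSet rs = con.getMetaData()
                            .getColumns(null, null, "APP_REQ_APPROVE_COMPARE", "%");
                    while (rs.next()) {
                        System.out.println(rs.getString("COLUMN_NAME") + " -> "
                                + rs.getString("TYPE_NAME") + " (JDBC type "
                                + rs.getInt("DATA_TYPE") + ")");
                    }
                    rs.close();
                } finally {
                    con.close();
                }
            }
        }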

    Read the article

  • Add new row in a databound form with an Oracle sequence as the primary key

    - by Ranhiru
    I am connecting C# with Oracle 11g. I have a DataTable which I fill using an OracleDataAdapter:

        OracleDataAdapter da;
        DataTable dt = new DataTable();
        da = new OracleDataAdapter("SELECT * FROM Author", con);
        da.Fill(dt);

    I have a few text boxes that I have databound to various columns in the data table:

        txtAuthorID.DataBindings.Add("Text", dt, "AUTHORID");
        txtFirstName.DataBindings.Add("Text", dt, "FIRSTNAME");
        txtLastName.DataBindings.Add("Text", dt, "LASTNAME");
        txtAddress.DataBindings.Add("Text", dt, "ADDRESS");
        txtTelephone.DataBindings.Add("Text", dt, "TELEPHONE");
        txtEmailAddress.DataBindings.Add("Text", dt, "EMAIL");

    I also have a DataGridView below the text boxes, showing the contents of the DataTable:

        dgvAuthor.DataSource = dt;

    Now when I want to add a new row, I do bm.AddNew(), where bm is defined in Form_Load as:

        BindingManagerBase bm;
        bm = this.BindingContext[dt];

    And when the save button is clicked after all the information is entered and validated, I do:

        this.BindingContext[dt].EndCurrentEdit();
        try
        {
            da.Update(dt);
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }

    However, when I insert a row into the database directly (using SQL*Plus), I use my_pk_sequence.nextval for the primary key. How do I specify that when adding a new row this way? I catch the exception ORA-01400: cannot insert NULL into ("SYSMAN"."AUTHOR"."AUTHORID"), which is obvious because nothing was supplied for the primary key. How do I get around this? Thanks a lot in advance :)
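
    Editor's note, one common way around ORA-01400 in this situation, hedged rather than definitive: let the database assign the key itself with a BEFORE INSERT trigger that fills AUTHORID from the sequence whenever the client leaves it null, so the data-bound form needs no change at all. The trigger name below is invented; my_pk_sequence is the sequence named in the question. The DDL is shown executed over JDBC, but it can equally be run once from SQL*Plus.

        // One-time DDL: a trigger that pulls the key from the sequence when
        // the incoming AUTHORID is null. Connection details are placeholders.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CreateAuthorTrigger {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "sysman", "password");
                Statement stmt = con.createStatement();
                stmt.execute(
                    "CREATE OR REPLACE TRIGGER author_bi_trg "
                  + "BEFORE INSERT ON author "
                  + "FOR EACH ROW "
                  + "WHEN (new.authorid IS NULL) "
                  + "BEGIN "
                  + "  SELECT my_pk_sequence.NEXTVAL INTO :new.authorid FROM dual; "
                  + "END;");
                stmt.close();
                con.close();
            }
        }

    Note that after da.Update(dt) the DataTable would still need to be refreshed (for example by re-running da.Fill) to see the server-assigned key values.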

    Read the article

  • Oracle doesn't remove cursors after closing result set

    - by Vladimir
    Note: we reuse a single connection.

        public Connection connection() {
            try {
                if ((connection == null) || (connection.isClosed())) {
                    if (connection != null) log.severe("Connection was closed!");
                    connection = DriverManager.getConnection(jdbcURL, username, password);
                }
            } catch (SQLException e) {
                log.severe("can't connect: " + e.getMessage());
            }
            return connection;
        }

        public IngisObject[] select(String query, String idColumnName, String[] columns) {
            Connection con = connection();
            Vector<IngisObject> objects = new Vector<IngisObject>();
            try {
                Statement stmt = con.createStatement();
                String sql = query;
                ResultSet rs = stmt.executeQuery(sql); // Oracle increases the cursor count here
                while (rs.next()) {
                    IngisObject o = new IngisObject("New Result");
                    o.setIdColumnName(idColumnName);
                    o.setDatabase(this);
                    for (String column : columns) o.attrs().put(column, rs.getObject(column));
                    objects.add(o);
                }
                rs.close(); // Oracle doesn't decrease the cursor count here, though that is expected
                stmt.close();
            } catch (SQLException ex) {
                System.out.println(query);
                ex.printStackTrace();
            }
            ...
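
    Editor's note, two points worth separating here. First, the snippet only closes the statement on the success path; any SQLException mid-fetch leaks a cursor, and because the connection is reused those leaks accumulate. Second, even after a correct close(), V$OPEN_CURSOR keeps listing cursors held in the session cursor cache, which is why it looks as if Oracle "doesn't remove" them; the session statistic 'opened cursors current' is the number that should drop. A hedged sketch of both points follows (the V$ query assumes SELECT privileges on the V$ views and Oracle 10g or later for the SID lookup):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class CursorCheck {

            // Close in finally so an exception mid-fetch cannot leak the cursor.
            static void select(Connection con, String query) throws SQLException {
                Statement stmt = con.createStatement();
                try {
                    ResultSet rs = stmt.executeQuery(query);
                    try {
                        while (rs.next()) {
                            // ... build IngisObject instances as before ...
                        }
                    } finally {
                        rs.close();
                    }
                } finally {
                    stmt.close();   // runs even when executeQuery/next throws
                }
            }

            // 'opened cursors current' counts cursors really open in this session;
            // V$OPEN_CURSOR also lists cursors kept in the session cursor cache,
            // so it does not shrink right after close().
            static int openCursors(Connection con) throws SQLException {
                PreparedStatement ps = con.prepareStatement(
                    "SELECT s.value FROM v$sesstat s JOIN v$statname n "
                  + "ON s.statistic# = n.statistic# "
                  + "WHERE n.name = 'opened cursors current' "
                  + "AND s.sid = SYS_CONTEXT('USERENV', 'SID')");
                ResultSet rs = ps.executeQuery();
                rs.next();
                int value = rs.getInt(1);
                rs.close();
                ps.close();
                return value;
            }
        }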

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things.

    Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
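    Editor's note, a hedged sketch of the standard approach: DBMS_STATS (available in 9i) can snapshot a schema's statistics into an ordinary table, which can then be moved with exp/imp and imported on development, so the optimiser costs queries as if it were looking at production volumes. Schema, connection, and statistics-table names below are illustrative, and the production half is something the hosting organisation could run once on your behalf.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class StatsTransfer {
            public static void main(String[] args) throws Exception {
                // Run against production (or have the hosting organisation run
                // the equivalent anonymous block in SQL*Plus).
                Connection prod = DriverManager.getConnection(
                        "jdbc:oracle:thin:@prodhost:1521:prod", "appowner", "secret");
                CallableStatement cs = prod.prepareCall(
                    "BEGIN "
                  + "  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APPOWNER', stattab => 'PROD_STATS'); "
                  + "  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPOWNER', stattab => 'PROD_STATS'); "
                  + "END;");
                cs.execute();
                cs.close();
                prod.close();

                // After moving the PROD_STATS table with exp/imp, run against
                // development so the optimiser sees production-like statistics.
                Connection dev = DriverManager.getConnection(
                        "jdbc:oracle:thin:@devhost:1521:dev", "appowner", "secret");
                cs = dev.prepareCall(
                    "BEGIN "
                  + "  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APPOWNER', stattab => 'PROD_STATS'); "
                  + "END;");
                cs.execute();
                cs.close();
                dev.close();
            }
        }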

    Read the article

  • DBD::Oracle and utf8 issue

    - by goe
    Hi all, I have a problem where my Perl code using the latest DBD::Oracle on Perl v5.8.8 throws an exception when I try to insert characters like 'ñ'. The exception:

        DBD::Oracle::db do failed: ORA-01756: quoted string not properly terminated (DBD ERROR: OCIStmtPrepare)

    My $ENV{NLS_LANG} is set to 'AMERICAN_AMERICA.AL32UTF8'. These are the DB parameters based on SELECT * FROM NLS_DATABASE_PARAMETERS:

        NLS_LANGUAGE             AMERICAN
        NLS_TERRITORY            AMERICA
        NLS_CURRENCY             $
        NLS_ISO_CURRENCY         AMERICA
        NLS_NUMERIC_CHARACTERS   .,
        NLS_CHARACTERSET         AL32UTF8
        NLS_CALENDAR             GREGORIAN
        NLS_DATE_FORMAT          DD-MON-RR
        NLS_DATE_LANGUAGE        AMERICAN
        NLS_SORT                 BINARY
        NLS_TIME_FORMAT          HH.MI.SSXFF AM
        NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
        NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
        NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
        NLS_DUAL_CURRENCY        $
        NLS_COMP                 BINARY
        NLS_LENGTH_SEMANTICS     BYTE

    These are the Perl-side parameters based on $db->ora_nls_parameters():

        $VAR1 = {
            'NLS_LANGUAGE' => 'AMERICAN',
            'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR',
            'NLS_SORT' => 'BINARY',
            'NLS_NUMERIC_CHARACTERS' => '.,',
            'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM',
            'NLS_ISO_CURRENCY' => 'AMERICA',
            'NLS_COMP' => 'BINARY',
            'NLS_CALENDAR' => 'GREGORIAN',
            'NLS_DATE_FORMAT' => 'DD-MON-RR',
            'NLS_DATE_LANGUAGE' => 'AMERICAN',
            'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM',
            'NLS_TERRITORY' => 'AMERICA',
            'NLS_LENGTH_SEMANTICS' => 'BYTE',
            'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16',
            'NLS_DUAL_CURRENCY' => '$',
            'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
            'NLS_NCHAR_CONV_EXCP' => 'FALSE',
            'NLS_CHARACTERSET' => 'AL32UTF8',
            'NLS_CURRENCY' => '$'
        };

    Here are some other strange facts: if I set NLS_LANG to 'AMERICAN_AMERICA.UTF8', the insert executes fine with the 'ñ' character. If I leave NLS_LANG as 'AMERICAN_AMERICA.AL32UTF8' but use 'Ñ', the insert runs fine as well.
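
    Editor's note, a hedged observation: ORA-01756 during OCIStmtPrepare points at the statement text itself, which suggests the 'ñ' is being interpolated into a quoted literal and mangled by client-side character-set conversion before the server ever parses it. The usual remedy is to bind the value so the driver handles the encoding; in Perl/DBI that means a placeholder, $dbh->prepare("INSERT INTO names (name) VALUES (?)") followed by $sth->execute($str). The same idea is sketched below in Java; table and column names are made up.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class Utf8Insert {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "scott", "tiger");
                PreparedStatement ps =
                        con.prepareStatement("INSERT INTO names (name) VALUES (?)");
                ps.setString(1, "ñandú");   // bound value, not spliced into SQL text
                ps.executeUpdate();
                ps.close();
                con.close();
            }
        }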

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take up a machine and start a new (third) reading process: it should be able to connect to the cache and listen to it, and it will have a full copy of the cache triplicated (as of now it's duplicated). Now that's wasteful from a common-sense standpoint too. The size of the cache is 2 GB, and without going distributed it's limiting us. That brings me to Coherence. But we don't have a database as a persistent store either; we have archival processes as our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep). That's my preliminary/intermediate analysis of Coherence as a solution. And a (supposedly) brilliant thought crossed my mind: why not have this as persistent storage with my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to the DB to replace those flat files. What say? Can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)
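
    Editor's note, a hedged sketch of the relevant Coherence mechanism: persistence behind a distributed cache is done with a read-write backing map backed by a CacheStore. The cluster calls load() on a miss and store()/erase() on changes, and nothing requires the implementation to be a database, so flat files can stay. The directory layout and key-to-filename mapping below are invented, keys and values must be Serializable, and a real version would still need to handle the 90-day expiry (for example via a scheduled sweep or per-entry expiry in the cache scheme).

        import com.tangosol.net.cache.CacheStore;

        import java.io.*;
        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.Map;

        // Minimal file-backed CacheStore: one serialized file per cache entry.
        public class FileCacheStore implements CacheStore {
            private final File dir;

            public FileCacheStore(String dirName) {
                dir = new File(dirName);
                dir.mkdirs();
            }

            private File fileFor(Object key) {
                return new File(dir, key.toString());   // assumes filesystem-safe keys
            }

            public Object load(Object key) {
                File f = fileFor(key);
                if (!f.exists()) return null;
                try {
                    ObjectInputStream in = new ObjectInputStream(new FileInputStream(f));
                    try { return in.readObject(); } finally { in.close(); }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }

            public Map loadAll(Collection keys) {
                Map result = new HashMap();
                for (Iterator i = keys.iterator(); i.hasNext();) {
                    Object key = i.next();
                    Object value = load(key);
                    if (value != null) result.put(key, value);
                }
                return result;
            }

            public void store(Object key, Object value) {
                try {
                    ObjectOutputStream out =
                        new ObjectOutputStream(new FileOutputStream(fileFor(key)));
                    try { out.writeObject(value); } finally { out.close(); }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }

            public void storeAll(Map entries) {
                for (Iterator i = entries.entrySet().iterator(); i.hasNext();) {
                    Map.Entry e = (Map.Entry) i.next();
                    store(e.getKey(), e.getValue());
                }
            }

            public void erase(Object key) {
                fileFor(key).delete();
            }

            public void eraseAll(Collection keys) {
                for (Iterator i = keys.iterator(); i.hasNext();) {
                    erase(i.next());
                }
            }
        }

    Such a class is wired into a distributed scheme through the cachestore-scheme element of a read-write backing map in the cache configuration; with write-behind enabled, the 2 GB working set stays in the cluster while the files absorb the history.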

    Read the article

  • Batch insert mode with hibernate and oracle: seems to be dropping back to slow mode silently

    - by Chris
    I'm trying to get batch inserts working with Hibernate into Oracle, according to what I've read here: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html, but with my benchmarking it doesn't seem any faster than before. Can anyone suggest a way to prove whether Hibernate is using batch mode or not? I hear there are numerous reasons why it may silently drop into normal mode (e.g. associations and generated IDs), so is there some way to find out why it has gone non-batch?

    My hibernate.cfg.xml contains this line, which I believe is all I need to enable batch mode:

        <property name="jdbc.batch_size">50</property>

    My insert code looks like this:

        List<LogEntry> entries = ...a list of 100 LogEntry data classes...
        Session sess = sessionFactory.getCurrentSession();
        for (LogEntry e : entries) {
            sess.save(e);
        }
        sess.flush();
        sess.clear();

    My LogEntry class has no associations; the only interesting field is the ID:

        @Entity
        @Table(name = "log_entries")
        public class LogEntry {
            @Id
            @GeneratedValue
            public Long id;
            // ...other fields - strings and ints...
        }

    However, since it is Oracle, I believe @GeneratedValue will use the sequence generator, and I believe that only the 'identity' generator stops bulk inserts. So if anyone can explain why it isn't running in batch mode, how I can find out for sure whether it is, or why Hibernate is silently dropping back to slow mode, I'd be most grateful. Thanks.
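
    Editor's note, a hedged way to get a trustworthy baseline: insert the same 100 rows with plain JDBC batching and compare timings. If the sketch below is dramatically faster than the Hibernate loop, Hibernate really is falling back to row-by-row inserts. The table and sequence names are made up to mirror the LogEntry mapping. Separately, enabling DEBUG logging for the org.hibernate.jdbc package should show "Executing batch size: n" lines from the batcher when batching is actually active.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchBaseline {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "scott", "tiger");
                con.setAutoCommit(false);
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO log_entries (id, message) "
                      + "VALUES (log_entry_seq.NEXTVAL, ?)");   // hypothetical sequence
                long start = System.currentTimeMillis();
                for (int i = 0; i < 100; i++) {
                    ps.setString(1, "entry " + i);
                    ps.addBatch();               // queued client-side
                }
                ps.executeBatch();               // one round trip for all 100 rows
                con.commit();
                System.out.println("batched insert took "
                        + (System.currentTimeMillis() - start) + " ms");
                ps.close();
                con.close();
            }
        }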

    Read the article

  • Invoking a SOAP web service from the Oracle DB

    - by Mousarules
    Dears, kindly note that I'm trying to invoke a SOAP web service from the Oracle DB using PL/SQL. After some investigation it appears that I have to use the UTL_HTTP package, but it didn't work for me! Kindly advise where exactly I should place the following SOAP in PL/SQL to be invoked... is it possible?

    SOAP 1.1: the following is a sample SOAP 1.1 request and response. The placeholders shown need to be replaced with actual values.

        POST /gmgwebservice/service.asmx HTTP/1.1
        Host: bulk.umniah.com
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://tempuri.org/SendSMS"

        <SendSMS xmlns="http://tempuri.org/">
          <UserName>string</UserName>
          <Password>string</Password>
          <MessageBody>string</MessageBody>
          <Sender>string</Sender>
          <Destination>string</Destination>
        </SendSMS>

        HTTP/1.1 200 OK
        Content-Type: text/xml; charset=utf-8
        Content-Length: length

        <SendSMSResponse xmlns="http://tempuri.org/">
          <SendSMSResult>string</SendSMSResult>
        </SendSMSResponse>

    This web service belongs to a web site called Bulk Messaging; the site sends an SMS to a specific mobile number after you fill in some text boxes. I need this to be done from Oracle Forms when a specific action occurs (a job), but I don't know how to use it inside my PL/SQL code. Hope that's clear; is there something else I have to mention?
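
    Editor's note, to make the UTL_HTTP shape concrete: the SOAP envelope is simply the body of an HTTP POST, written with utl_http.write_text after the Content-Type and SOAPAction headers are set. A hedged sketch follows; the anonymous block is executed over JDBC here only for illustration, and the same block can be compiled into a stored procedure that the Forms job calls. Credentials and the destination number are placeholders, error handling is omitted, and the database must be permitted to reach bulk.umniah.com.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class SendSmsFromDb {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "scott", "tiger");
                CallableStatement cs = con.prepareCall(
                    "DECLARE\n"
                  + "  req  utl_http.req;\n"
                  + "  resp utl_http.resp;\n"
                  + "  line VARCHAR2(4000);\n"
                  + "  body VARCHAR2(4000) :=\n"
                  + "    '<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">'\n"
                  + "    || '<soap:Body><SendSMS xmlns=\"http://tempuri.org/\">'\n"
                  + "    || '<UserName>user</UserName><Password>pass</Password>'\n"
                  + "    || '<MessageBody>hello</MessageBody><Sender>me</Sender>'\n"
                  + "    || '<Destination>0790000000</Destination>'\n"
                  + "    || '</SendSMS></soap:Body></soap:Envelope>';\n"
                  + "BEGIN\n"
                  + "  req := utl_http.begin_request(\n"
                  + "           'http://bulk.umniah.com/gmgwebservice/service.asmx',\n"
                  + "           'POST', 'HTTP/1.1');\n"
                  + "  utl_http.set_header(req, 'Content-Type', 'text/xml; charset=utf-8');\n"
                  + "  utl_http.set_header(req, 'SOAPAction', '\"http://tempuri.org/SendSMS\"');\n"
                  + "  utl_http.set_header(req, 'Content-Length', length(body));\n"
                  + "  utl_http.write_text(req, body);          -- the envelope goes here\n"
                  + "  resp := utl_http.get_response(req);\n"
                  + "  utl_http.read_text(resp, line, 4000);    -- first 4000 chars of reply\n"
                  + "  utl_http.end_response(resp);\n"
                  + "END;");
                cs.execute();
                cs.close();
                con.close();
            }
        }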

    Read the article

  • Oracle clients dead wait

    - by Macroideal
    Hi all, I met a problem yesterday. Maybe it's because it was April 1st... but it did exist. I have 3 PCs in a remote area: two clients and one Oracle server. My app runs separately on the two clients, connecting hourly to the Oracle database. My clients worked well before April 1st, but suddenly my app on the client machines went down. First, I did not change any configuration. I use libsqlora8 to connect to the server, and the code goes into a dead loop inside the library. I tried sqlplus, but it hangs in my shell terminal as if it has hit an infinite loop: no return until I press Ctrl+C. The reason, I guess, is an "infinite loop" somewhere. BTW, when I use my local PC to connect to the server, it works well. From this symptom alone, we can see the problem lies in the client machines. I checked the configuration files on both the local machine and the client machines - they are identical. Have you met a problem like this? I hope it's not due to April 1st.

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone. Plus, a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integration (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes; Identity Management for managing users; WS Manager for managing web services; Business Intelligence Enterprise Edition for analytics; and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.
    For example, a basic MDM solution will need: a set of data access methods that support master data as a service as well as direct real-time access and batch loads and extracts; a data migration service for initial loads and periodic updates; a metadata management capability for items such as business entity matrixed relationships and hierarchies; a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements; a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship; a set of data quality functions that can manage structured and unstructured data; a data quality interface to assist with preventing new errors from entering the system even when data entry takes place outside the MDM application itself; a continuing data cleansing function to keep the data up to date; an internal triggering mechanism to create and deploy change information to all connected systems; a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history; a flexible business rules engine for managing master data processes such as privacy and data movement; a user interface to support casual users and data stewards; a business intelligence structure to support profiling, compliance, and business performance indicators; and an analytical foundation for directly analyzing master data. Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • Oracle SQL Developer v3.2.1 Now Available

    - by thatjeffsmith
    Oracle SQL Developer version 3.2.1 is now available. I recommend that everyone upgrade to this release. It features more than 200 bug fixes, tweaks, and polish applied to the 3.2 edition. The high-profile bug fixes submitted by customers and users on our forums are listed in all their glory for your review. I want to highlight a few of the changes though, as I recognize many of you lack the time and/or patience to 'read the docs.' That would include me, which is why I enjoy writing these kinds of blog posts. I'm lazy - just like you!

    No more artificial line breaks between CREATE OR REPLACE and your PL/SQL

    In versions 3.2 and older, when you pull up your stored procedural objects in our editor, you would see a line break inserted between the CREATE OR REPLACE and the body of your code. In version 3.2.1, we have removed the line break. (The screenshots in the original post compare 3.1 and 3.2.1.)

    Trivia - Did You Know? The database doesn't store the 'CREATE' or 'CREATE OR REPLACE' bit of your PL/SQL code. If we look at the USER_SOURCE view, we can see that the code begins with the object name, so the CREATE OR REPLACE bit is 'artificial'. The intent is to give you the code necessary to recreate your object - and have it 'compile' into the database - so we pretty much HAVE to add the 'CREATE OR REPLACE.' From now on it will appear inline with the first line of your code.

    Exporting Tables & Views

    When exporting data from your tables or views, previous versions of SQL Developer presented a 3-step wizard that allowed you to choose your columns and apply data filters for what is exported. This was kind of redundant: the grids already allowed you to select your columns and apply filters. Wouldn't it be more intuitive AND efficient to just make the grids behave in a What You See Is What You Get (WYSIWYG) fashion? In version 3.2.1, that is exactly what happens. The wizard now has only two steps, and the grid exports the data and columns as defined in the visible grid. Let the grid properties define what is actually exported! And here is what is pasted into my worksheet:

        "BREWERY"|"CITY"
        "3 Brewers Restaurant Micro-Brewery"|"Toronto"
        "Amsterdam Brewing Co."|"Toronto"
        "Ball Brewing Company Ltd."|"Toronto"
        "Big Ram Brewing Company"|"Toronto"
        "Black Creek Historic Brewery"|"Toronto"
        "Black Oak Brewing"|"Toronto"
        "C'est What?"|"Toronto"
        "Cool Beer Brewing Company"|"Toronto"
        "Denison's Brewing"|"Toronto"
        "Duggan's Brewery"|"Toronto"
        "Feathers"|"Toronto"
        "Fermentations! - Danforth"|"Toronto"
        "Fermentations! - Mount Pleasant"|"Toronto"
        "Granite Brewery & Restaurant"|"Toronto"
        "Labatt's Breweries of Canada"|"Toronto"
        "Mill Street Brew Pub"|"Toronto"
        "Mill Street Brewery"|"Toronto"
        "Molson Breweries of Canada"|"Toronto"
        "Molson Brewery at Air Canada Centre"|"Toronto"
        "Pioneer Brewery Ltd."|"Toronto"
        "Post-Production Bistro"|"Toronto"
        "Rotterdam Brewing"|"Toronto"
        "Steam Whistle Brewing"|"Toronto"
        "Strand Brasserie"|"Toronto"
        "Upper Canada Brewing"|"Toronto"

    JUST what I wanted.

    And One Last Thing

    Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel - to a single Excel file, that is. In version 3.2.1 you can now do that. Let's export the bulk of the HR schema to Excel, with each table going to its own worksheet in the same file. If you try this in previous versions of SQL Developer, it will just write the first table to the Excel file; this is one of the bugs we addressed in v3.2.1. (The screenshots in the original post show the multi-table selection and the resulting multi-worksheet file.) I have a sneaking suspicion that this will be a frequently used feature going forward. Excel seems to be the cornerstone of many of our popular features. Imagine that!
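
    Editor's note, the USER_SOURCE trivia above is easy to confirm yourself. A hedged sketch (the object name is invented) that rebuilds the DDL the way a tool has to, by prepending CREATE OR REPLACE to the stored text:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class ShowSource {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "hr", "hr");
                PreparedStatement ps = con.prepareStatement(
                    "SELECT text FROM user_source "
                  + "WHERE name = ? AND type = 'PROCEDURE' ORDER BY line");
                ps.setString(1, "MY_PROC");
                ResultSet rs = ps.executeQuery();
                StringBuilder ddl = new StringBuilder("CREATE OR REPLACE ");
                while (rs.next()) {
                    ddl.append(rs.getString(1));   // first stored line starts at the object name
                }
                System.out.println(ddl);
                rs.close();
                ps.close();
                con.close();
            }
        }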

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface

    This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because, in writing it, I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy.

    Table of Contents

    Exercise Z.1: ZFS Pools
    Exercise Z.2: ZFS File Systems
    Exercise Z.3: ZFS Compression
    Exercise Z.4: ZFS Deduplication
    Exercise Z.5: ZFS Encryption
    Exercise Z.6: Solaris 11 Shadow Migration

    Introduction

    This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: deduplication, encryption, and shadow migration. Also included is the creation of zpools and ZFS file systems - the basic building blocks of the technology - and compression, which is the complement of deduplication. The exercises are just introductions; you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward, the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with.

    Exercise Z.1: ZFS Pools

    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.

    Lab: You will check the status of existing zpools, create your own pool, and expand it. Your Solaris 11 installation already has a root ZFS pool containing the root file system. Check this:

        root@solaris:~# zpool list
        NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
        rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE -

        root@solaris:~# zpool status
        pool: rpool
        state: ONLINE
        scan: none requested
        config:
        NAME STATE READ WRITE CKSUM
        rpool ONLINE 0 0 0
        c3t0d0s0 ONLINE 0 0 0
        errors: No known data errors

    Note the disk device the root pool is on: c3t0d0s0. Now you will create your own ZFS pool. First check what disks are available:

        root@solaris:~# echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0
        1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0
        2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0
        3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0
        4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0
        5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0
        6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0
        Specify disk (enter its number):
        Specify disk (enter its number):

    The root disk is numbered 0. The others are free for use.
    Try creating a simple pool and observe the error message:

        root@solaris:~# zpool create mypool c3t2d0 c3t3d0
        'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool

    So destroy that pool and create a mirrored pool instead:

        root@solaris:~# zpool destroy mypool
        root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
        root@solaris:~# zpool status mypool
        pool: mypool
        state: ONLINE
        scan: none requested
        config:
        NAME STATE READ WRITE CKSUM
        mypool ONLINE 0 0 0
        mirror-0 ONLINE 0 0 0
        c3t2d0 ONLINE 0 0 0
        c3t3d0 ONLINE 0 0 0
        errors: No known data errors

    Exercise Z.2: ZFS File Systems

    Task: You have to create file systems for later exercises.

    You can see that when a pool is created, a file system of the same name is created:

        root@solaris:~# zfs list
        NAME USED AVAIL REFER MOUNTPOINT
        mypool 86.5K 2.94G 31K /mypool

    Create your file systems and mount points as follows:

        root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

    The -o option sets the mount point and automatically creates the necessary directory.

        root@solaris:~# zfs list mypool/mydata1
        NAME USED AVAIL REFER MOUNTPOINT
        mypool/mydata1 31K 2.94G 31K /data1

    Exercise Z.3: ZFS Compression

    Task: Try out the different forms of compression available in ZFS.

    Lab: Create a second filesystem with compression, fill both file systems with the same data, and observe the results.

    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

        compression=on | off | lzjb | gzip | gzip-N | zle

    Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

    Create a second filesystem with compression turned on. Note how you set and get your values separately:

        root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
        root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
        root@solaris:~# zfs get compression mypool/mydata1
        NAME PROPERTY VALUE SOURCE
        mypool/mydata1 compression off default
        root@solaris:~# zfs get compression mypool/mydata2
        NAME PROPERTY VALUE SOURCE
        mypool/mydata2 compression gzip-9 local

    Now you can copy the contents of /usr/lib into both your normal and compressing filesystems and observe the results. Don't forget the dot or period (".") in the find(1) command below:

        root@solaris:~# cd /usr/lib
        root@solaris:/usr/lib# find . -print | cpio -pdv /data1
        root@solaris:/usr/lib# find . -print | cpio -pdv /data2

    The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect:

        root@solaris:/usr/lib# zfs list
        NAME USED AVAIL REFER MOUNTPOINT
        mypool 1.35G 1.59G 31K /mypool
        mypool/mydata1 1.01G 1.59G 1.01G /data1
        mypool/mydata2 341M 1.59G 341M /data2

    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administrators Guide.

    Exercise Z.4: ZFS Deduplication

    The deduplication property is used to remove redundant data from a ZFS file system.
    With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.

    Task: See how to implement deduplication and its effects.

    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

        root@solaris:/usr/lib# zfs destroy mypool/mydata2
        root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
        root@solaris:/usr/lib# rm -rf /data1/*
        root@solaris:/usr/lib# mkdir /data1/2nd-copy
        root@solaris:/usr/lib# zfs list
        NAME USED AVAIL REFER MOUNTPOINT
        mypool 1.02M 2.94G 31K /mypool
        mypool/mydata1 43K 2.94G 43K /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME USED AVAIL REFER MOUNTPOINT
        mypool 1.02G 1.99G 31K /mypool
        mypool/mydata1 1.01G 1.99G 1.01G /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME USED AVAIL REFER MOUNTPOINT
        mypool 1.99G 1.96G 31K /mypool
        mypool/mydata1 1.98G 1.96G 1.98G /data1

    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property, dedupratio:

        root@solaris:/usr/lib# zpool get dedupratio mypool
        NAME PROPERTY VALUE SOURCE
        mypool dedupratio 4.30x -

    Deduplication can also be checked using "zpool list":

        root@solaris:/usr/lib# zpool list
        NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
        mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE -
        rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE -

    Before moving on to the next topic, destroy that dataset and free up some space:

        root@solaris:~# zfs destroy mypool/mydata1

    Exercise Z.5: ZFS Encryption

    Task: Encrypt sensitive data.

    Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS encryption. In particular, it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

        root@solaris:~# zfs create -o encryption=on mypool/data2
        Enter passphrase for 'mypool/data2': ********
        Enter again: ********
        root@solaris:~#

    Creation of a descendent dataset shows that encryption is inherited from the parent:

        root@solaris:~# zfs create mypool/data2/data3
        root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
        NAME PROPERTY VALUE SOURCE
        mypool/data2 encryption on local
        mypool/data2 keysource passphrase,prompt local
        mypool/data2 keystatus available -
        mypool/data2 checksum sha256-mac local
        mypool/data2/data3 encryption on inherited from mypool/data2
        mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2
        mypool/data2/data3 keystatus available -
        mypool/data2/data3 checksum sha256-mac inherited from mypool/data2

    You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session, you may wish to explore changing a key using "zfs key -c mypool/data2".

    Exercise Z.6: Shadow Migration

    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.

    Lab: Create the infrastructure for shadow migration and transfer one file system into another.

    First create the file system you want to migrate:

        root@solaris:~# zpool create oldstuff c3t4d0
        root@solaris:~# zfs create oldstuff/forgotten

    Then populate it with some files:

        root@solaris:~# cd /var/adm
        root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

    You need the shadow-migration package installed:

        root@solaris:~# pkg install shadow-migration
        Packages to install: 1
        Create boot environment: No
        Create backup boot environment: No
        Services to change: 1
        DOWNLOAD PKGS FILES XFER (MB)
        Completed 1/1 14/14 0.2/0.2
        PHASE ACTIONS
        Install Phase 39/39
        PHASE ITEMS
        Package State Update Phase 1/1
        Image State Update Phase 2/2

    You then enable the shadowd service:

        root@solaris:~# svcadm enable shadowd
        root@solaris:~# svcs shadowd
        STATE STIME FMRI
        online 7:16:09 svc:/system/filesystem/shadowd:default

    Set the file system to be migrated to read-only:

        root@solaris:~# zfs set readonly=on oldstuff/forgotten

    Create a new ZFS file system with the shadow property set to the file system to be migrated:

        root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

    Use the shadowstat(1M) command to see the progress of the migration:

        root@solaris:~# shadowstat
        EST BYTES BYTES ELAPSED
        DATASET XFRD LEFT ERRORS TIME
        mypool/remembered 92.5M - - 00:00:59
        mypool/remembered 99.1M 302M - 00:01:09
        mypool/remembered 109M 260M - 00:01:19
        mypool/remembered 133M 304M - 00:01:29
        mypool/remembered 149M 339M - 00:01:39
        mypool/remembered 156M 86.4M - 00:01:49
        mypool/remembered 156M 8E 29 (completed)

    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual.

    That concludes this lab session.

    Read the article

  • Date Tracking in Oracle HRMS

    - by Manoj Madhusoodanan
    Update Date Track Modes

    To maintain employee data effectively, Oracle HCM uses a mechanism called date tracking. The main motive behind the date track modes is to maintain past, present, and future data effectively. The update date track modes are:

    CORRECTION: Overwrites the data. No history is maintained.
    UPDATE: Keeps the history; the new change takes effect as of the effective date.
    UPDATE_CHANGE_INSERT: Inserts the record and preserves the future.
    UPDATE_OVERRIDE: Inserts the record and overrides the future.

    Action: Created Employee # 22 on 01-JAN-2012. The record in PER_ALL_PEOPLE_F is as shown below.

        Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number
        01-JAN-2012 | 31-DEC-4712 | 24 | - | 2

    Action: Updated the record in CORRECTION mode.

        01-JAN-2012 | 31-DEC-4712 | 24 | Single | 3

    Action: Updated the record in UPDATE mode effective 01-JUN-2012 with Marital Status = Married.

        01-JAN-2012 | 31-MAY-2012 | 24 | Single | 4
        01-JUN-2012 | 31-DEC-4712 | 24 | Married | 5

    Action: Updated the record in UPDATE mode effective 01-SEP-2012 with Marital Status = Divorced.

        01-JAN-2012 | 31-MAY-2012 | 24 | Single | 4
        01-JUN-2012 | 31-AUG-2012 | 24 | Married | 6
        01-SEP-2012 | 31-DEC-4712 | 24 | Divorced | 7

    Action: Updated the record in UPDATE_CHANGE_INSERT mode effective 01-MAR-2012 with Marital Status = Living Together.

        01-JAN-2012 | 29-FEB-2012 | 24 | Single | 8
        01-MAR-2012 | 31-MAY-2012 | 24 | Living Together | 9
        01-JUN-2012 | 31-AUG-2012 | 24 | Married | 6
        01-SEP-2012 | 31-DEC-4712 | 24 | Divorced | 7

    Action: Updated the record in UPDATE_OVERRIDE mode effective 01-AUG-2012 with Marital Status = Divorced.

        01-JAN-2012 | 29-FEB-2012 | 24 | Single | 8
        01-MAR-2012 | 31-MAY-2012 | 24 | Living Together | 9
        01-JUN-2012 | 31-JUL-2012 | 24 | Married | 10
        01-AUG-2012 | 31-DEC-4712 | 24 | Divorced | 11

    Delete Date Track Modes

    The delete date track modes are:

    ZAP: Wipes all records.
    DELETE: Deletes the current record.
    FUTURE_CHANGE: Deletes current and future changes.
    DELETE_NEXT_CHANGE: Deletes the next change.

    The element entry records are shown below (Effective Start Date | Effective End Date | Element Entry Id | Object Version Number):

        01-JAN-2012 | 12-OCT-2012 | 129831 | 3
        13-OCT-2012 | 19-OCT-2012 | 129831 | 5
        20-OCT-2012 | 31-DEC-4712 | 129831 | 6

    Action: Deleted the record in ZAP mode effective 14-JAN-2012.

        No rows.

    Action: Deleted the record in DELETE mode effective 14-OCT-2012.

        01-JAN-2012 | 12-OCT-2012 | 129831 | 3
        13-OCT-2012 | 14-OCT-2012 | 129831 | 6

    Action: Deleted the record in FUTURE_CHANGE mode effective 14-JAN-2012.

        01-JAN-2012 | 31-DEC-4712 | 129831 | 4

    Action: Deleted the record in DELETE_NEXT_CHANGE mode effective 14-JAN-2012.

        01-JAN-2012 | 19-OCT-2012 | 129831 | 4
        20-OCT-2012 | 31-DEC-4712 | 129831 | 6
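
    Editor's note, a hedged way to watch what each date-track mode actually did: list the date-tracked rows for one person straight from PER_ALL_PEOPLE_F, using the columns from the tables above. The person_id value is illustrative and an APPS connection is assumed.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class DateTrackHistory {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "apps", "apps");
                PreparedStatement ps = con.prepareStatement(
                    "SELECT effective_start_date, effective_end_date, "
                  + "       marital_status, object_version_number "
                  + "FROM per_all_people_f WHERE person_id = ? "
                  + "ORDER BY effective_start_date");
                ps.setLong(1, 22);   // illustrative person_id
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getDate(1) + " .. " + rs.getDate(2)
                            + "  " + rs.getString(3) + "  OVN " + rs.getInt(4));
                }
                rs.close();
                ps.close();
                con.close();
            }
        }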

    Read the article

  • ERROR: Not enough space?

    - by dsmoljanovic
    Now this is a very unspecific question. I'm trying to figure out what this message means. Here is the story behind it: I'm installing Oracle Enterprise Manager Cloud Control (12c R3) on Solaris 10 (5/09). The installer opens up, I enter all the needed information, and at the last step I click Install. It immediately crashes with only "ERROR: Not enough space" written in the log and console, and nothing else. Now, could this be a Java error or a Solaris error? I'm thinking it happens either when it starts to copy files or when it tries to launch a process that would do that. What space is it referring to? Disk (have enough), swap (also), memory (yep)... Any ideas are helpful.

    Edit: I found this exception in the oraInventory logs:

        oracle.sysman.oii.oiic.OiicInstallAPIException: Not enough space
            at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallSession(OiicAPIInstaller.java:2165)
            at oracle.sysman.oii.oiic.OiicAPIInstaller.initOUIAPISession(OiicAPIInstaller.java:790)
            at oracle.sysman.install.oneclick.EMGCOUIInstaller.prepareForInstall(EMGCOUIInstaller.java:676)
            at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext$1.run(EMGCSummaryDlgonNext.java:243)
            at java.lang.Thread.run(Thread.java:662)
            at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext.actionsOnClickofNext(EMGCSummaryDlgonNext.java:1067)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at oracle.sysman.install.oneclick.EMGCUtil.performonClickOfNextForClass(EMGCUtil.java:399)
            at oracle.sysman.install.oneclick.EMGCUtil.performPageLevelValidationsForSilentInstall(EMGCUtil.java:367)
            at oracle.sysman.install.oneclick.EMGCInstaller.prepareForSilentInstall(EMGCInstaller.java:1459)
            at oracle.sysman.install.oneclick.EMGCInstaller.main(EMGCInstaller.java:1553)

    Disk status:

        bash-3.00$ df -h /tmp
        Filesystem size used avail capacity Mounted on
        swap 8.1G 2.7G 5.4G 33% /tmp
        bash-3.00$ df -h /u01
        Filesystem size used avail capacity Mounted on
        / 275G 28G 244G 11% /

    Swap:

        root@gs12emcc # swap -s
        total: 18306040k bytes allocated + 3837808k reserved = 22143848k used, 5712664k available

    Read the article

  • Upgrade 10g Osso to 11g OAM (Part 2)

    - by Pankaj Chandiramani
    This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html

    In the last post we saw the overview of upgrading OSSO to OAM 11g. Now some more details on the same. As we are using the coexistence feature, we have to install the OAM server and upgrade the existing OSSO 10g server to the OAM servers.

    OAM Upgrade Steps Overview

    Pre-req: You already have OAM 11g installed.
    Upgrade Step 1: Configure the user store and make it primary.
    Upgrade Step 2: Create the policy domain; this is done by the UA automatically.
    Upgrade Step 3: Migrate partners; this is done by running the Upgrade Assistant.
    Finally, verify the successful upgrade.

    Details on the UA step: to upgrade the existing OSSO 10g servers to the OAM server, run the UA script in OAM, which copies all the partner app details from OSSO to OAM 11g. The script is run_ua.sh; it will ask you to input the Policies.properties file from the SSO $OH/sso/config folder of OSSO 10g, and other variables such as the DB password.

    Some pointers:

    - Upgrading OSSO to OAM 11g by default enables coexistence mode on the OAM server.
    - Front-end the OAM server with the same load balancer that is the front end of the OSSO 10g servers. Now OAM and OSSO 10g servers work in coexistence mode.
    - OAM 11g is made to understand the 10g OSSO token format and session handling capabilities so as to coexist with 10g OSSO servers.

    How to test? Try to access the partner applications and verify that single sign-on works. Also verify that the user does not have to log in again if already authenticated by either the OAM or the OSSO 10g server.

    Screenshots and troubleshooting tips to follow.

    Read the article

  • Is CRM keeping up with the times?

    - by antonella.buonagurio(at)oracle.com
    Social Customer Relationship Management was born out of the revolution brought by Web 2.0, an epochal change in the way we communicate that has added incredible richness to the conversations between companies and consumers. Companies now have unprecedented tools for understanding their market; consumers, in turn, have the power to use new channels to express their needs and to communicate and share comments and experiences. But Web 2.0 is not the only factor affecting the strategic CRM choices that every company must consider in order to sustain this new relationship with its consumers.

    Do you want to find out which forces (or factors) companies must take into account so that their customer relationship management processes keep pace with changed social and economic conditions?

    To learn more, the whitepaper produced by Oracle, Paul Gillin, and IT Business Edge outlines some of them:

    1. The business. How has it changed as a result of the multichannel experience now possible, of customer centricity, and of the social networks that dominate online relationships?
    2. Technology. To gain competitive advantage, companies today must adopt the most innovative technologies to bring greater value to their business while minimizing infrastructure costs. Which technologies, and what are the actual benefits?

    And more still... by reading the white paper "Is your CRM solution keeping up with the times?"

    Read the article

  • New User of UPK?

    - by [email protected]
    The UPK Developer comes with a variety of manuals to help support your organization in the development and deployment of content. The Developer manuals can be found in the \Documentation\Language Code\Reference folder where the Developer has been installed. As of 3.5.x, the documentation can also be accessed via the Start menu: Start\Programs\User Productivity Kit\Documentation\Reference.

    Content Deployment.pdf: This manual provides information on how to deploy content to your audience.

    Content Development.pdf: This manual provides information on how to create, maintain, and publish content using the Developer. The content of this manual also appears in the Developer help system.

    Content Player.pdf: This manual provides instructions on how to view content using the Player. The content of this manual also appears in the Player help system.

    In-Application Support Guide.pdf: This manual provides information on how to implement context-sensitive, in-application support for enterprise applications using Player content.

    Installation & Administration.pdf: This manual provides instructions for installing the Developer in a single-user or multi-user environment, as well as information on how to add and manage users and content in a multi-user installation. An Administration help system also appears in the Developer for authors configured as administrators. This manual also provides instructions for installing and configuring Usage Tracking.

    Upgrade.pdf: This manual provides information on how to upgrade from a previous version to the current version.

    Usage Tracking Administration & Reporting.pdf: This manual provides instructions on how to manage users and usage tracking reports.

    - Kathryn Lustenberger, Oracle UPK Outbound Product Management

    Read the article

  • The Importance of Collaboration, Analytics, and Mobile Technologies for Modern HR

    - by HCM-Oracle
    It was 17 years ago that a McKinsey study uncovered the "war for talent". Today, it is no point of contention that a strong talent-centric strategy may be the most important focus for organizations. A talent-centric organization aims at recruiting, retaining, and developing the best talent. The best employees will be able to adapt their responsibilities and come up with solutions to problems, which are important skills in today's dynamic work environment, and arguably more important in this recessionary climate. The notion of hiring and retaining talented employees for organizational sustainability and competitive advantage is not a new concept. But can organizations consider themselves "talent-centric" without up-to-date collaboration tools, HR analytics, and mobile technologies in pursuit of attracting, hiring, and retaining the best talent?

    Attend the Upcoming Webcast

    A webcast on June 19th at 3pm EST will reveal more results of the study. Based on original research done in collaboration between Oracle HCM and HCI, we unveil new findings that explore how critical collaboration, analytic insights, and mobile technology are for supporting a talent-centric work environment. You will learn:

    - What are the benefits of being talent-centric?
    - How do collaboration via social networks, analytics with predictive insights, and mobile technologies support the talent-centric strategy of an organization?
    - What is the state of play for these technologies?

    Register Here

    Read the article

  • Executive Edge: It's the end of work as we know it

    - by Naresh Persaud
    If you are at Oracle OpenWorld, it has been an exciting couple of days, from Larry's keynote to the events at the Executive Edge. The CSO Summit was included as a program within the Executive Edge this year. The day started with a great presentation from Joel Brenner, author of "America the Vulnerable," as he discussed the impact of state-sponsored espionage on businesses. The opportunity for every business is to turn security into a business advantage. As we enter an inhospitable security climate, every business has to adapt to the security climate change. Amit Jasuja's presentation focused on how customers can secure the new digital experience. As every sector of the economy transforms to adapt to changing global economic pressures, every business has to adapt. For IT organizations, the biggest transformation will involve cloud, mobile, and social. Organizations that can get security right in the "new work order" will have an advantage. It is truly the end of work as we know it. The "new work order" means working anytime and anywhere. The office is anywhere we want it to be, because work is not a place, it is an activity. Below is a copy of Amit Jasuja's presentation, "Securing the New Experience" (from OracleIDM).

    Read the article

  • How do you integrate sales and production processes the smart way?

    - by Claudia Caramelli-Oracle
    Technological innovation has transformed the way customers interact with companies. Moreover, today's market scenarios demand attention and effectiveness in sales to remain fully competitive. To achieve the best sales performance, it is necessary to accelerate and automate the exchange of information between the sales and production departments: minimizing the wait for technical data and feasibility approvals, and reducing bottlenecks and possible human error through a process of control and standardization of the offer.

    The event sponsors await you on June 11 at the prestigious headquarters of the Unione Industriale di Torino to discover how to:

    - Shorten the sales cycle by making the entire sales process more efficient
    - Minimize the impact of sales staff turnover
    - Improve value to promise
    - Achieve better loyalty and satisfaction among your customers, reducing switching

    Watch a live demonstration by Oracle, the world leader in CPQ (Configure, Price and Quoting) solutions, of a fast, easy-to-use tool that enables smart management of the commercial configuration of B2B offers, including mobile access and executive dashboards. Learn how other companies have successfully adopted these solutions.

    Participation in the event is free, but capacity is limited; register now to secure your place: CLICK HERE to register. If you need more information, write to Silvia Valgoi.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another?

    I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)

    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases, based upon the seed data changes delivered by the product data loader (.ldt) files, as driven by the EBS ATG loader control (.lct) files. You can use this new tool to report on the differences in the following types of seed data:

    - Concurrent Program definitions
    - Descriptive Flexfield entity definitions
    - Application Object Library profile option definitions
    - Application Object Library (AOL) key flexfield, function, lookups, and value set definitions
    - Application Object Library (AOL) menu and responsibility definitions
    - Application Object Library messages
    - Application Object Library request set definitions
    - Application Object Library printer styles definitions
    - Report Manager / WebADI component and integrator entity definitions
    - Business Intelligence Publisher (BI Publisher) entity definitions
    - BIS Request Set Generator entity definitions
    - ... and more

    Your feedback is welcome

    This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

    Related Articles

    - Oracle E-Business Suite Release 12.1.3 Now Available
    - New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
    - EBS 12.0 Minimum Requirements for Extended Support Finalized
    - Five Key Resources for Upgrading to E-Business Suite Release 12
    - E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
    - New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • Hosted EBS 11i Integration Repository Temporarily Offline

    - by Steven Chan (Oracle Development)
    Most developers know that they can integrate their external applications with the E-Business Suite via the business service interfaces and SOA service endpoints documented in the E-Business Suite's Integration Repository. This is shipped as part of EBS 12. Until recently, it was provided as a hosted environment on the Oracle.com domain for EBS 11i. Unfortunately, we identified some standards-related issues in the process of switching from the existing server that hosts the EBS 11i environment to a new one, notably in the area of accessibility. Some of those issues will require coding changes to resolve. Given our focus on EBS 12.2 right now, it may take some time to prioritize this relative to our other existing commitments. In the meantime, we are required to suspend access to the EBS 11i Integration Repository. I don't have a firm schedule for getting this back online yet, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

    Related Articles

    - Integration Repository for the E-Business Suite
    - New Whitepaper: Primer on Integrating EBS 12 with Other Applications

    Read the article

  • How to start WebLogic Server using default scripts?

    - by Luz Mestre-Oracle
    There are a few common issues reported when starting WebLogic Server using scripts:

    1. The user is not able to access the WebLogic console.
    2. After a few days or hours, WebLogic Server stops abruptly.
    3. When the user closes PuTTY, they are no longer able to connect to WebLogic Server.
    4. When the user closes the Windows command prompt, they are no longer able to connect to WebLogic Server.
    5. WebLogic is started using startManagedWebLogic.cmd/startManagedWebLogic.sh.

    By default, WebLogic Server does not run in background mode, so after you close the window the process finishes as well. On Linux/Unix-based platforms, you need to use:

        nohup ./startManagedWebLogic.sh <Server> <URL> &

    On Windows platforms, you need to start managed servers using Windows services:

    How to Install MS Windows Services For FMW 11g WebLogic Domain Admin and Managed Servers (Doc ID 1060058.1)
    http://docs.oracle.com/cd/E23943_01/web.1111/e13708/winservice.htm

    There are a few more causes that can produce similar symptoms, like a JVM crash, signals sent by the operating system, and many other reasons. But the steps above are the first place to start. Enjoy!

    Read the article

  • Social Media Stations for Partners

    - by Oracle OpenWorld Blog Team
    By Stephanie Spada

    One of our exciting additions to this year's Oracle Partner Network Exchange @ OpenWorld is Social Media Stations. Partners have the opportunity to get customized, face-to-face expert advice on how they can better engage their customers and find new prospects online using social media tools.

    When: Sunday, September 30
    Time: 3:00 p.m.-5:00 p.m.
    Where: Moscone South, Esplanade level

    When: Monday, October 1
    Time: 9:30 a.m.-6:00 p.m.
    Where: Moscone South, OPN Lounge, Exhibitor level

    Each customized social media consultation will take only 25 minutes. Here's how it works:

    - Partners check in with a Social Media Rally coordinator, who will assess needs and make the right connections for each session.
    - Partners go to the Photo Station, where a headshot will be taken that can be used on social profiles, websites, or for articles and posts across the web.
    - Partners meet with the One-2-One consultants, who will walk them through how they're using social media today and what the next steps could be.

    Social media channels and methods discussed can include Google+, Google Alerts, Google Analytics, Facebook, LinkedIn, Search Engine Optimization, Twitter, and more. With so many choices, partners can decide how to focus their time.

    To get the most out of the Social Media Stations, partners should:

    - Wear appropriate attire for the headshot photo
    - Bring log-in information for the social platforms they want to discuss
    - Come prepared with questions for the One-2-One consultation so session time can be maximized

    For questions, or to schedule a session ahead of time, partners should send an email to: [email protected].

    Read the article

  • Procurement and E-Business Suite Product Analyzers .. Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output, delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be part of your production maintenance program, run by your sysadmin or by designated end users. The output provides recommendations, solutions, and early warnings about items that should be reviewed and corrected. Each Analyzer can be run on demand, or scheduled for repeatability and emailed to critical reviewers.

    There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, covering topics including the following. Review them all at Doc ID 1545562.1.

    - Workflow
    - Concurrent Processing
    - Clone Log Parser Utility (Rapid Clone)
    - Invoices, Payments, Accounting, Suppliers and EBTax
    - Validate Data before Period Close
    - EBTax Setup
    - Payables Trial Balance
    - Internet Expenses
    - AutoInvoice Post-Process
    - ASCP Performance
    - PO Approval
    - iProcurement Items

    For the Procurement-specific Analyzers, access them directly at:

    R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1)
    R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

    Read the article
