Search Results

Search found 8943 results on 358 pages for 'audit tables'.


  • Multiple Foreign keys to a single table and single key pointing to more than one table

    - by user1216775
    I need some suggestions from the database design experts here. I have around six foreign keys in a single table (defect) which all point to the primary key in the user table. It looks like: defect (.....,assigned_to,created_by,updated_by,closed_by...). If I want information about the defect I have to make six joins. Is there a better way to do it?

    Another one: I have a states table which can store one of a user-defined set of values. I have a defect table and a task table, and I want both of these tables to share the common state table (New, In Progress, etc.). So I created: task (.....,state_id,type_id,.....), defect (.....,state_id,type_id,...), state (state_id,state_name,...), importance (imp_id,imp_name,...). There are many such common attributes along with state, like importance (normal, urgent, etc.), priority, and so on, and for all of them I want to use the same table. I am keeping a flag in each of these tables to differentiate task from defect. What is the best solution in such a case? If somebody is using this application in the health domain, they would like to assign different types, states, and importances for their defects or tasks. Moreover, when a user selects any project I want to display all the types, states, etc. under a configuration parameters section.
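
    One way this could be laid out (a sketch only; apart from the column names quoted in the question, every table and column name here is illustrative) is role-specific FK columns into users plus a generic lookup table carrying an entity-type flag:

        CREATE TABLE users (
            user_id  INT PRIMARY KEY,
            username VARCHAR(50) NOT NULL
        );

        CREATE TABLE lookup_value (
            lookup_id   INT PRIMARY KEY,
            category    VARCHAR(20) NOT NULL,  -- 'STATE', 'IMPORTANCE', 'PRIORITY', ...
            entity_type VARCHAR(10) NOT NULL,  -- 'TASK' / 'DEFECT': the flag mentioned above
            name        VARCHAR(50) NOT NULL
        );

        CREATE TABLE defect (
            defect_id   INT PRIMARY KEY,
            assigned_to INT,
            created_by  INT,
            updated_by  INT,
            closed_by   INT,
            state_id    INT,
            imp_id      INT,
            FOREIGN KEY (assigned_to) REFERENCES users(user_id),
            FOREIGN KEY (created_by)  REFERENCES users(user_id),
            FOREIGN KEY (updated_by)  REFERENCES users(user_id),
            FOREIGN KEY (closed_by)   REFERENCES users(user_id),
            FOREIGN KEY (state_id)    REFERENCES lookup_value(lookup_id),
            FOREIGN KEY (imp_id)      REFERENCES lookup_value(lookup_id)
        );

        -- The six user joins are unavoidable when all six names are needed at once,
        -- but aliasing the users table per role keeps the query readable:
        SELECT d.defect_id, ua.username AS assigned_to_name, uc.username AS created_by_name
        FROM defect d
        LEFT JOIN users ua ON ua.user_id = d.assigned_to
        LEFT JOIN users uc ON uc.user_id = d.created_by;

    Whether the lookups live in one table with a category column or in separate state/importance tables is the classic one-true-lookup-table trade-off: the single table is easier to extend per customer, while separate tables give real FK integrity per attribute.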

    Read the article

  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I can provide an example, but I don't think it's needed to make my point. Anyone experienced with SQL will immediately know what I'm talking about.

    Usually we use joins to minimize the number of records by matching the left and right rows. However, under certain conditions, joining tables causes a multiplication of results, where the result is all permutations of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder.

    I have two questions. The first is: is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second is: is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part of the query, using something like:

        WITH t1 AS (
            SELECT DISTINCT ... -- [or GROUP BY]
        ), t2 AS (
            SELECT DISTINCT ...
        ), t3 AS (
            SELECT DISTINCT ...
        )
        SELECT ...

    I have often seen DISTINCT applied to subqueries, and there is obviously a reason for doing this. However, I'm talking about something a little different and perhaps more subtle and tricky.
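
    To make the pattern concrete (hypothetical orders/payments/shipments tables, not from the question), this is the kind of pre-aggregation the WITH skeleton above is gesturing at:

        -- One order with 3 payments and 4 shipments would join to 12 rows
        -- if payments and shipments were joined directly; pre-aggregating
        -- keeps each join one-to-one, so downstream math runs once per order.
        WITH pay AS (
            SELECT order_id, SUM(amount) AS paid
            FROM payments
            GROUP BY order_id
        ), ship AS (
            SELECT order_id, COUNT(*) AS boxes
            FROM shipments
            GROUP BY order_id
        )
        SELECT o.order_id, pay.paid, ship.boxes
        FROM orders o
        LEFT JOIN pay ON pay.order_id = o.order_id
        LEFT JOIN ship ON ship.order_id = o.order_id;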

    Read the article

  • Hibernate Relationship Mapping/Speed up batch inserts

    - by manyxcxi
    I have 5 MySQL InnoDB tables: Test, InputInvoice, InputLine, OutputInvoice, OutputLine, and each is mapped and functioning in Hibernate. I have played with using StatelessSession/Session and the JDBC batch size. I have removed any generator classes to let MySQL handle the id generation, but it is still performing quite slowly.

    Each of those tables is represented by a Java class and mapped in Hibernate accordingly. Currently, when it comes time to write the data out, I loop through the objects and do a session.save(Object), or session.insert(Object) if I'm using StatelessSession. I also do a flush and clear (when using Session) when my line count reaches the max JDBC batch size (50).

    Would it be faster if I had these in a 'parent' class that held the objects and did a session.save(master) instead of saving each one? If I had them in a master/container class, how would I map that in Hibernate to reflect the relationship? The container class wouldn't actually be a table of its own, but a relationship all based on two indexes, run_id (int) and line (int). Another direction would be: how do I get Hibernate to do a multi-row insert?
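
    On the MySQL side, the multi-row insert asked about at the end looks like the following (the amount column is made up for illustration; run_id and line come from the question). With Connector/J, adding rewriteBatchedStatements=true to the JDBC URL makes the driver rewrite a JDBC batch of single-row inserts into this form, which combines well with hibernate.jdbc.batch_size:

        INSERT INTO InputLine (run_id, line, amount)
        VALUES
            (1, 1, 10.00),
            (1, 2, 12.50),
            (1, 3,  7.25);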

    Read the article

  • Return the Column Name for row with last non-null value "Ms Access 2007"

    - by bri1969
    I have a table which contains a list of league players. Each season, we record their Points per Dart. Their total PPD for that season is stored in other tables and extracted through other queries, which in turn are imported into the master table "Player History" at the end of the season for use as historical data. The current query retrieves each player's PPD for each season they played, when they played last, and how many seasons they played.

    The code for Last Season Played has become too long and unstable to use. It was originally created and split into two separate columns, (LSP1) and (LSP2), because a single SQL expression was too long. These work, but as I add seasons, Access does not like the length of the code.

    In short, I need simpler code that will look at each row, find the last non-null cell in that row, and report which column that last non-null value is in. So if a player played seasons 30 & 31, did not play 32, but did play 33, the column with the code should be titled Last Season Played, and for that player it would state "33", indicating that this player last played season "33". I will provide both tables and the query. Please help.
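
    Without seeing the actual query, one hedged approach in Access SQL: Switch() returns the value paired with the first true expression, so testing the season columns newest-first yields the last season played in one short expression. The [S30]..[S33] column names below are placeholders for the real per-season PPD columns:

        SELECT [Player],
               Switch(
                   Not IsNull([S33]), 33,
                   Not IsNull([S32]), 32,
                   Not IsNull([S31]), 31,
                   Not IsNull([S30]), 30
               ) AS [Last Season Played]
        FROM [Player History];

    Adding a season then means adding one short line at the top of the Switch() list rather than growing a nested IIf() expression.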

    Read the article

  • Multiple collections tied to one base collection with filters and eventing

    - by damienc88
    I have a complex model served from my back end, which has a bunch of regular attributes, some nested models, and a couple of collections. My page has two tables, one for invalid items and one for valid items. The items in question are from one of the nested collections. Let's call it baseModel.documentCollection, implementing DocumentsCollection. I don't want any filtration code in my Marionette.CompositeViews, so what I've done is the following (note, duplicated for the 'valid' case):

        var invalidDocsCollection = new DocumentsCollection(
            baseModel.documentCollection.filter(function(item) {
                return !item.isValidItem();
            })
        );
        var invalidTableView = new BookIn.PendingBookInRequestItemsCollectionView({
            collection: app.collections.invalidDocsCollection
        });
        layout.invalidDocsRegion.show(invalidTableView);

    This is fine for actually populating two tables independently from one base collection. But I'm not getting the whole event pipeline down to the base collection, obviously. This means that when a document's validity changes, there's no neat way for it to shift to the other collection, and therefore the other view. What I'm after is a nice way of having a base collection that I can have filter collections sit on top of. Any suggestions?

    Read the article

  • undefined method `code' for nil:NilClass message with rails and a legacy database

    - by Jude Osborn
    I'm setting up a very simple Rails 3 application to view data in a legacy MySQL database. The legacy database is mostly Rails ORM compatible, except that foreign key fields are pluralized. For example, my "orders" table has a foreign key field to the "companies" table called "companies_id" (rather than "company_id"). So naturally I'm having to use the ":foreign_key" attribute of "belongs_to" to set the field name manually. I haven't used Rails in a few years, but I'm pretty sure I'm doing everything right, yet I get the following error when trying to access "order.currency.code":

        undefined method `code' for nil:NilClass

    This is a very simple application so far. The only thing I've done is generate the application and a bunch of scaffolds for each of the legacy database tables. Then I've gone into some of the models to make adjustments to accommodate the above-mentioned difference in database naming conventions, and added some fields to the views. That's it. No funny business. So my database tables look like this (relevant fields only):

        orders
        ------
        id
        description
        invoice_number
        currencies_id

        currencies
        ----------
        id
        code
        description

    My Order model looks like this:

        class Order < ActiveRecord::Base
          belongs_to :currency, :foreign_key => 'currencies_id'
        end

    My Currency model looks like this:

        class Currency < ActiveRecord::Base
          has_many :orders
        end

    The relevant view snippet looks like this:

        <% @orders.each do |order| %>
          <tr>
            <td><%= order.description %></td>
            <td><%= order.invoice_number %></td>
            <td><%= order.currency.code %></td>
          </tr>
        <% end %>

    I'm completely out of ideas. Any suggestions?
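
    The nil suggests at least one order row whose currencies_id doesn't resolve to a currency, which is a data problem the mapping can't fix. A quick check to run directly against the legacy MySQL database (a diagnostic sketch, independent of Rails):

        SELECT o.id, o.currencies_id
        FROM orders o
        LEFT JOIN currencies c ON c.id = o.currencies_id
        WHERE c.id IS NULL;
        -- Any rows returned have a NULL or orphaned currencies_id,
        -- and order.currency will be nil for exactly those orders.

    If this returns rows, guarding the view with order.currency.try(:code), or repairing the orphaned rows, resolves the error.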

    Read the article

  • Social Networking & Network Affiliations

    - by Code Sherpa
    Hi. I am in the process of planning a database for a social networking project and stumbled upon this URL, which is a (crude) reverse-engineered guess at Facebook's schema: http://www.flickr.com/photos/ikhnaton2/533233247/

    What is of interest to me is the notion of "Affiliations", and I am trying to fully understand how they work, technically speaking. Where I am somewhat confused is the NetworkID column in the "FacebookGroups", "FacebookEvent", and "Affiliations" tables (NID in Affiliations). How are these network affiliations interconnected? In my own project, I have a simple profile table:

        CREATE TABLE [dbo].[Profiles](
            [profileid] [int] IDENTITY(1,1) NOT NULL,
            [userid] [uniqueidentifier] NOT NULL,
            [username] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL,
            [applicationname] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL,
            [isanonymous] [bit] NULL,
            [lastactivity] [datetime] NULL,
            [lastupdated] [datetime] NULL,
            CONSTRAINT [PK__Profiles__1DB06A4F] PRIMARY KEY CLUSTERED
            (
                [profileid] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY],
            CONSTRAINT [PKProfiles] UNIQUE NONCLUSTERED
            (
                [username] ASC,
                [applicationname] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
        ) ON [PRIMARY]

    One profile can have many affiliations, and one affiliation can have many profiles. And I would like to design it in such a way that relationships between affiliations tell me something about the associated profiles. In fact, based on the affiliations that users select, I would like to know how to infer as many things as possible about that person. My question is: how should I design my network affiliation tables, and how do they operate per my above requirements? A rough SQL schema would be appreciated in your response. Thanks in advance...
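
    As a rough starting point (an illustrative sketch in the same T-SQL style, not a definitive design): model affiliations as their own table with an optional self-reference, and connect profiles to affiliations through a junction table, since the relationship is many-to-many in both directions:

        CREATE TABLE [dbo].[Affiliations](
            [affiliationid] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [name] [varchar](255) NOT NULL,
            -- optional link to a broader/parent network
            [parentaffiliationid] [int] NULL REFERENCES [dbo].[Affiliations]([affiliationid])
        )

        CREATE TABLE [dbo].[ProfileAffiliations](
            [profileid] [int] NOT NULL REFERENCES [dbo].[Profiles]([profileid]),
            [affiliationid] [int] NOT NULL REFERENCES [dbo].[Affiliations]([affiliationid]),
            CONSTRAINT [PK_ProfileAffiliations] PRIMARY KEY CLUSTERED ([profileid], [affiliationid])
        )

    Profiles that share an affiliation, or whose affiliations share a parent network, can then be related with ordinary joins, which is the basis for the kind of inference described above.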

    Read the article

  • The best way to return related data in a SQL statement

    - by Darvis Lombardo
    I have a question on the best method to get back to a piece of data that is in a related table on the other side of a many-to-many relationship table. My first method uses joins to get back to the data, but because there are multiple matching rows in the relationship table, I had to use TOP 1 to get a single-row result. My second method uses a subquery to get the data, but this just doesn't feel right. So, my question is: which is the preferred method, or is there a better method? The script needed to create the test tables, insert data, and run the two queries is below. Thanks for your advice!

    Darvis

        -- Create Tables
        DECLARE @TableA TABLE (
            [A_ID] [int] IDENTITY(1,1) NOT NULL,
            [Description] [varchar](50) NULL)

        DECLARE @TableB TABLE (
            [B_ID] [int] IDENTITY(1,1) NOT NULL,
            [A_ID] [int] NOT NULL,
            [Description] [varchar](50) NOT NULL)

        DECLARE @TableC TABLE (
            [C_ID] [int] IDENTITY(1,1) NOT NULL,
            [Description] [varchar](50) NOT NULL)

        DECLARE @TableB_C TABLE (
            [B_ID] [int] NOT NULL,
            [C_ID] [int] NOT NULL)

        -- Insert Test Data
        INSERT INTO @TableA VALUES('A-One')
        INSERT INTO @TableA VALUES('A-Two')
        INSERT INTO @TableA VALUES('A-Three')

        INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-One')
        INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-Two')
        INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-Three')
        INSERT INTO @TableB (A_ID, Description) VALUES(2,'B-Four')
        INSERT INTO @TableB (A_ID, Description) VALUES(2,'B-Five')
        INSERT INTO @TableB (A_ID, Description) VALUES(3,'B-Six')

        INSERT INTO @TableC VALUES('C-One')
        INSERT INTO @TableC VALUES('C-Two')
        INSERT INTO @TableC VALUES('C-Three')

        INSERT INTO @TableB_C (B_ID, C_ID) VALUES(1, 1)
        INSERT INTO @TableB_C (B_ID, C_ID) VALUES(2, 1)
        INSERT INTO @TableB_C (B_ID, C_ID) VALUES(3, 1)

        -- Get result - method 1
        SELECT TOP 1 C.*, A.Description
        FROM @TableC C
        JOIN @TableB_C BC ON BC.C_ID = C.C_ID
        JOIN @TableB B ON B.B_ID = BC.B_ID
        JOIN @TableA A ON B.A_ID = A.A_ID
        WHERE C.C_ID = 1

        -- Get result - method 2
        SELECT C.*,
               (SELECT A.Description
                FROM @TableA A
                WHERE EXISTS (SELECT *
                              FROM @TableB_C BC
                              JOIN @TableB B ON B.B_ID = BC.B_ID
                              WHERE BC.C_ID = C.C_ID
                                AND B.A_ID = A.A_ID))
        FROM @TableC C
        WHERE C.C_ID = 1
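
    For comparison, a third option against the same test tables (a sketch, not necessarily the preferred answer) is CROSS APPLY, which keeps the TOP 1 next to the lookup it restricts and lets an ORDER BY make the chosen row deterministic, something bare TOP 1 in method 1 leaves to chance:

        -- Get result - method 3 (CROSS APPLY)
        SELECT C.*, X.Description
        FROM @TableC C
        CROSS APPLY (SELECT TOP 1 A.Description
                     FROM @TableB_C BC
                     JOIN @TableB B ON B.B_ID = BC.B_ID
                     JOIN @TableA A ON A.A_ID = B.A_ID
                     WHERE BC.C_ID = C.C_ID
                     ORDER BY A.A_ID) AS X
        WHERE C.C_ID = 1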

    Read the article

  • What would be the Better db design for the old db structure?

    - by yawok
    I've an old database where I store data about holidays and the dates on which they are celebrated:

        id  country      hdate       description    link
        1   Afghanistan  2008-01-19  Ashura         ashura
        2   Albania      2008-01-01  New Year Day   new-year

    The flaw in the above structure is that I repeat the data other than the date for every festival, every year, and every country. For example, I store a new date for 2009 for Ashura and Afghanistan. I tried to limit the redundancy and split the tables as:

        countries (id, name)
        holidays (id, holiday, celebrated_by, link)
            -- celebrated_by will store the ids of countries separated by ','
        holiday_dates (holiday_id, date, year)
            -- date will be the full date and year will be e.g. 2008 or 2009

    Now I have some problems with this structure too. Consider that I store a holiday like Independence Day: it's common to several countries but will have different dates, so how do I handle this? The link will have to be different too. And I need to list the countries which celebrate the same holiday, and also, when I describe a single holiday, I need to list all the other holidays that country celebrates. And most of all, I already have a huge amount of data in the old tables and I need to migrate it to the new design once that design is finalized... Any ideas?
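
    One hedged direction (names illustrative): since the date and the link vary per country for a shared holiday like Independence Day, both belong on a country-holiday junction table rather than on holidays, and the comma-separated celebrated_by column disappears entirely:

        CREATE TABLE countries (
            id   INT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );

        CREATE TABLE holidays (
            id      INT PRIMARY KEY,
            holiday VARCHAR(100) NOT NULL   -- e.g. 'Ashura', 'Independence Day'
        );

        CREATE TABLE country_holidays (
            country_id INT NOT NULL,
            holiday_id INT NOT NULL,
            hdate      DATE NOT NULL,       -- full date, per country, per year
            link       VARCHAR(100),        -- per-country link
            PRIMARY KEY (country_id, holiday_id, hdate),
            FOREIGN KEY (country_id) REFERENCES countries(id),
            FOREIGN KEY (holiday_id) REFERENCES holidays(id)
        );

        -- Countries celebrating a given holiday:
        SELECT c.name
        FROM countries c
        JOIN country_holidays ch ON ch.country_id = c.id
        WHERE ch.holiday_id = 1;

    Once countries and holidays are keyed, the old rows migrate with roughly one INSERT ... SELECT joining the old table to the two new lookup tables.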

    Read the article

  • What db fits me?

    - by afvasd
    Dear Everyone, I am currently using MySQL. I am finding that my schema is getting incredibly complicated, so I am looking for a new DB that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine if two news items from different sites are actually referring to the same topic, and cluster such news together. The relationship is depicted below:

        cluster
         \--news1
             \--word1
             \--word2
         \--news2
             \--word3
         \--news3
             \--word1
             \--word3

    I then apply some magic to determine the importance of each word. Summing the importance of each word gives me the importance of a news article. Summing the importance of each news article gives me the importance of a cluster. Note that above the cluster there are also subgroups (like splits by region, etc.) and categories (like sports, etc.) whose importance I have to determine for a particular day, per se. I have used views in the past to do so, but I realized that views are very slow, so I will normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables, update data (for which I am using TRUNCATE TABLE) and then insert from scratch.

    I am currently looking into something schemaless like MongoDB. I do not need distributedness. I would very much want something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM, because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!

    Read the article

  • Java ORM related question - SQL Vs Google DB (Big Table?) GAE

    - by StackerFlow
    I was wondering about the following two options when one is not using SQL tables but ORM-based DBs (for example, when you are using GAE). Would the second option be less efficient?

    Requirement: There is an object. The object has a collection of similar items. I need to store this object. For example, say the object is a tree and it has a collection of leaves.

    Option 1: Traditional SQL-type structure: a table for the Tree (with TreeId as the identifier for a row in the table) and a table for the Leaves (where each leaf has a TreeId, and to show the leaves of a tree, I query all leaves where the TreeId is the id of the tree). Here, the Tree structure DOES NOT have a field with leaves.

    Option 2: ORM / GAE tables: Using the same example above, I have an object for Tree where the object has a collection (Set/List in Java/C++) of leaves. I store and retrieve the Tree together with the leaves (as the leaves are implemented as a Set in the Tree object).

    My question is: will the second one be less efficient than the first option? If so, why? Are there other alternatives? Thank you!
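
    For concreteness, Option 1 in plain SQL would be something along these lines (an illustrative sketch). The practical difference the question is circling is that the child rows can be read, or not read, independently of the parent row:

        CREATE TABLE trees (
            tree_id INT PRIMARY KEY,
            name    VARCHAR(50)
        );

        CREATE TABLE leaves (
            leaf_id INT PRIMARY KEY,
            tree_id INT NOT NULL,
            color   VARCHAR(20),
            FOREIGN KEY (tree_id) REFERENCES trees(tree_id)
        );

        -- Leaves load independently of the tree row, and only when needed:
        SELECT * FROM leaves WHERE tree_id = 42;

    In Option 2, by contrast, the serialized collection travels with the parent on every read and write, which is where the efficiency concern comes from when the collection is large.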

    Read the article

  • How to make cakePHP retrieve the data represented by a foreign key?

    - by XL
    Greetings cake experts, I have a question that I think will help a lot of people getting started with CakePHP. I have a feeling it will be easy for some of you, but it is quite challenging to me.

    I have a simple database with multiple tables, and I can't figure out how to make CakePHP display the values associated with a foreign key in an index view, or create a view where fields of my choice (the ones that make sense to users, like a location name rather than location_id) can be updated or viewed on a single page.

    I have created an example at http://lovecats.cakeapp.com that illustrates the question. If you look at the page and click "list cats", you will notice that it shows the location_id field from the locations table. You will also notice that when you click "add cats", you must choose a location_id from the locations table. This is the automagic way that CakePHP builds the app, but I want this field to be location_name. The database is set up so that the table cats has a foreign key called location_id with a relationship to a table called locations. This is my problem: I want these pages to display the location_name instead of the location_id.

    If you want to log in to the application, you can go to http://cakeapp.com/sqldesigners/sql/lovecats with the password 'password' to look at the db relationships, etc. How do I build a page that shows the fields that I want? And is it possible to create a page that updates fields from all of the tables at once? This is the slice of cake that I have been trying to figure out, and this would REALLY get me over a hump. You can download the app and sql from the above url.
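
    In CakePHP, the usual first step is setting the Location model's displayField property to 'location_name', so baked lists and select boxes show names instead of ids. Strictly on the SQL side, the page needs Cake to issue a join along these lines (a sketch against the schema described above):

        SELECT Cat.id, Cat.name, Location.location_name
        FROM cats AS Cat
        JOIN locations AS Location ON Cat.location_id = Location.id;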

    Read the article

  • How to split column values using a stored procedure

    - by user1444281
    I have two tables. Table 1 is:

        SELECT * FROM dbo.TBL_WD_WEB_DECK

        WD_ID  WD_TITLE
        ------------------
        1
        2

    and the data in the 2nd table is:

        WS_ID  WS_WEBPAGE_ID  WS_SPONSORS_ID  WS_STATUS
        -----------------------------------------------
        1      1              1,2,3,4         Y

    I wrote the following stored procedure to insert the data into both tables, catching the identity of the main table dbo.TBL_WD_WEB_DECK. WD_ID is related to WS_WEBPAGE_ID. I wrote the cursor in the update action to split the WS_SPONSORS_ID column, calling the split function in the cursor, but it is not working. The stored procedure is:

        ALTER PROCEDURE [dbo].[SP_Example_SPLIT]
        (
            @action char(1),
            @wd_id int out,
            @wd_title varchar(50),
            @ws_webpage_id int,
            @ws_sponsors_id varchar(250),
            @ws_status char(1)
        )
        AS
        BEGIN
            SET NOCOUNT ON
            IF (@action = 'A')
            BEGIN
                INSERT INTO dbo.TBL_WD_WEB_DECK(WD_TITLE) VALUES(@WD_TITLE)
                DECLARE @X INT
                SET @X = @@IDENTITY
                INSERT INTO dbo.TBL_WD_SPONSORS(WS_WEBPAGE_ID,WS_SPONSORS_ID,WS_STATUS)
                VALUES (@ws_webpage_id,@ws_sponsors_id,@ws_status)
            END
            ELSE IF (@action = 'U')
            BEGIN
                UPDATE dbo.TBL_WD_WEB_DECK SET WD_TITLE = @wd_title WHERE WD_ID = @WD_ID
                UPDATE dbo.TBL_WD_SPONSORS SET WS_STATUS = 'N'
                WHERE WS_SPONSORS_ID NOT IN (@ws_sponsors_id)
                  AND ws_webpage_id = @wd_id
                BEGIN
                    /* Declaring cursor to split the value in the ws_sponsors_id column */
                    DECLARE @var int
                    DECLARE splt CURSOR FOR
                        /* uses the split function, passing it the parameter */
                        SELECT * FROM iter_simple_intlist_to_tbl(@WS_SPONSORS_ID)
                    OPEN splt
                    FETCH NEXT FROM splt INTO @var
                    WHILE (@@FETCH_STATUS = 0)
                    BEGIN
                        IF NOT EXISTS(SELECT * FROM dbo.TBL_WD_SPONSORS
                                      WHERE WS_WEBPAGE_ID = @wd_id AND WS_SPONSORS_ID = @var)
                        BEGIN
                            INSERT INTO dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID,WS_SPONSORS_ID)
                            VALUES(@wd_id,@var)
                        END
                    END
                    CLOSE SPONSOR
                    DEALLOCATE SPONSOR
                END
            END
        END

    The result I want: if I insert data into WD_ID and into the WS_SPONSORS_ID column, the value in the WS_SPONSORS_ID column should be split, and I need to compare it with WD_ID. The result I need is:

        WD_ID  WD_TITLE
        ---------------
        1      TEST
        2      TEST1
        3

        WS_ID  WS_WEBPAGE_ID  WS_SPONSORS_ID  WS_STATUS
        -----------------------------------------------
        1      1              1               Y
        2      1              2               N
        3      1              3               Y

    If I pass the string in WS_SPONSORS_ID as 1,2,3 it has to split like the above. Can you help?
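
    Two problems stand out in the cursor block as posted (a hedged reading, since the split function's body isn't shown): the loop never fetches the next row, so @@FETCH_STATUS never changes and it spins on the first value forever, and CLOSE/DEALLOCATE name SPONSOR rather than the cursor's actual name splt. (Separately, NOT IN (@ws_sponsors_id) compares against the single string '1,2,3', not a list of values.) A corrected sketch of just that block:

        DECLARE @var int
        DECLARE splt CURSOR FOR
            SELECT * FROM iter_simple_intlist_to_tbl(@WS_SPONSORS_ID)
        OPEN splt
        FETCH NEXT FROM splt INTO @var
        WHILE (@@FETCH_STATUS = 0)
        BEGIN
            IF NOT EXISTS (SELECT * FROM dbo.TBL_WD_SPONSORS
                           WHERE WS_WEBPAGE_ID = @wd_id
                             AND WS_SPONSORS_ID = @var)
            BEGIN
                INSERT INTO dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID, WS_SPONSORS_ID)
                VALUES (@wd_id, @var)
            END
            FETCH NEXT FROM splt INTO @var  -- advance the cursor (missing in the original)
        END
        CLOSE splt
        DEALLOCATE splt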

    Read the article

  • MySQL Check and compare and if necessary create data.

    - by user2979677
    Hello, I have three different tables:

        Talk: ID, type_id, YEAR, NUM, NUM_LETTER, DATE, series_id, TALK, speaker_id,
        SCRIPREF, DONE, DOUBLE_SID, DOUBLE_TYP, LOCAL_CODE, MISSING, RESTRICTED,
        WHY_RESTRI, SR_S_1, SR_E_1, BCV_1, SR_S_2, SR_E_2, BCV_2, SR_S_3, SR_E_3,
        BCV_3, SR_S_4, SR_E_4, BCV_4, QTY_IV, organisation_id, recommended, topic_id,
        thumbnail, mp3_file_size, duration, version

        Product_component: id, product_id, talk_id, position, version

        Product: id, created, product_type_id, last_modified, last_modified_by,
        num_sold, current_stock, min_stock, max_stock, comment, organisation_id,
        series_id, name, subscription_type_id, recipient_id, discount_start,
        discount_amount, discount_desc, discount_finish, discount_percent,
        voucher_amount, audio_points, talk_id, price_override, product_desc,
        instant_download_status_id, downloads, instant_downloads, promote_start,
        promote_finish, promote_desc, restricted, from_tape, discontinued,
        discontinued_reason, discontinued_date, external_url, version

    I want to create a procedure that will take each id from talk, compare it against product to see if a corresponding product exists, and create one if it doesn't. My problem is that talk and product are not directly related: id in talk is related to talk_id in product_component, and id in product is related to product_id in product_component. Is there a way to do this? I tried this:

        CREATE DEFINER=`sthelensmedia`@`localhost` PROCEDURE `CreateSingleCDProducts`()
        BEGIN
            DECLARE t_id INT;
            DECLARE t_restricted BOOLEAN;
            DECLARE t_talk CHAR(255);
            DECLARE t_series_id INT;
            DECLARE p_id INT;
            DECLARE done INT DEFAULT 0;
            DECLARE cur1 CURSOR FOR
                SELECT id,restricted,talk,series_id FROM talk WHERE organisation_id=2;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;

            UPDATE product SET restricted=TRUE WHERE product_type_id=3;

            OPEN cur1;
            create_loop: LOOP
                FETCH cur1 INTO t_id, t_restricted, t_talk, t_series_id;
                IF done=1 THEN
                    LEAVE create_loop;
                END IF;
                INSERT INTO product (created,product_type_id,last_modified,
                                     organisation_id,series_id,name,restricted)
                VALUES (NOW(),3,NOW(),2,t_series_id,t_talk,t_restricted);
                SELECT LAST_INSERT_ID() INTO p_id;
                INSERT INTO product_component (product_id,talk_id,position)
                VALUES (p_id,t_id,0);
            END LOOP create_loop;
            CLOSE cur1;
        END

    Just wondering if anyone could help me.
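
    One hedged observation: since product_component is the bridge between talk and product, the "does a product already exist?" test can go straight into the cursor's query, so rerunning the procedure doesn't create duplicates. A sketch of the amended cursor declaration:

        DECLARE cur1 CURSOR FOR
            SELECT t.id, t.restricted, t.talk, t.series_id
            FROM talk t
            WHERE t.organisation_id = 2
              AND NOT EXISTS (SELECT 1
                              FROM product_component pc
                              WHERE pc.talk_id = t.id);

    With that filter in place the rest of the loop can stay as it is; the cursor is still needed here because each iteration has to capture LAST_INSERT_ID() to link the new product row to its product_component row.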

    Read the article

  • I’m screwing up this LEFT JOIN

    - by Jason Shultz
    I'm doing a left join on several tables. What I want to happen is that it lists all the businesses, then looks through the photos, videos, specials, and categories. If there are photos, the table shows yes; if there are videos, it shows yes in the table. It does all that without any problems, except for one thing: for every photo, it shows the business that many times. For example, if there are 5 photos in the DB for a business, it shows the business five times. Obviously, this is not what I want to happen. Can you help?

        function frontPageList()
        {
            $this->db->select('b.id, b.busname, b.busowner, b.webaddress, p.thumb, v.title, c.catname');
            $this->db->from('business AS b');
            $this->db->where('b.featured', '1');
            $this->db->join('photos AS p', 'p.busid = b.id', 'left');
            $this->db->join('video AS v', 'v.busid = b.id', 'left');
            $this->db->join('specials AS s', 's.busid = b.id', 'left');
            $this->db->join('category As c', 'b.category = c.id', 'left');
            return $this->db->get();
        }
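
    Because the goal is only a yes/no per business, the duplication can be removed by not joining the child tables at all and deriving each flag with EXISTS instead. The SQL the query builder needs to end up producing is something like this (a sketch using the table and column names from the code above):

        SELECT b.id, b.busname, b.busowner, b.webaddress,
               CASE WHEN EXISTS (SELECT 1 FROM photos p   WHERE p.busid = b.id) THEN 'yes' ELSE 'no' END AS has_photos,
               CASE WHEN EXISTS (SELECT 1 FROM video v    WHERE v.busid = b.id) THEN 'yes' ELSE 'no' END AS has_videos,
               CASE WHEN EXISTS (SELECT 1 FROM specials s WHERE s.busid = b.id) THEN 'yes' ELSE 'no' END AS has_specials,
               c.catname
        FROM business b
        LEFT JOIN category c ON b.category = c.id
        WHERE b.featured = '1';

    Each EXISTS subquery returns at most one answer per business, so the result stays one row per business regardless of how many photos or videos exist.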

    Read the article

  • What are advantages of using a one-to-one table relationship? (MySQL)

    - by byronh
    What are advantages of using a one-to-one table relationship as opposed to simply storing all the data in one table? I understand and make use of one-to-many, many-to-one, and many-to-many all the time, but implementing a one-to-one relationship seems like a tedious and unnecessary task, especially if you use naming conventions for relating (php) objects to database tables. I couldn't find anything on the net or on this site that could supply a good real-world example of a one-to-one relationship. At first I thought it might be logical to separate 'users', for example, into two tables, one containing public information like an 'about me' for profile pages and one containing private information such as login/password, etc. But why go through all the trouble of using unnecessary JOINS when you can just choose which fields to select from that table anyway? If I'm displaying the user's profile page, obviously I would only SELECT id,username,email,aboutme etc. and not the fields containing their private info. Anyone care to enlighten me with some real-world examples of one-to-one relationships?
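
    For a concrete example (a hedged sketch, not from the question): the public/private user split mentioned above, with the one-to-one enforced by making the child table's primary key double as the foreign key:

        CREATE TABLE users (
            user_id  INT PRIMARY KEY,
            username VARCHAR(50) NOT NULL,
            aboutme  TEXT
        );

        -- Same key on both sides: at most one credentials row per user.
        CREATE TABLE user_credentials (
            user_id       INT PRIMARY KEY,
            password_hash CHAR(60) NOT NULL,
            last_login    DATETIME,
            FOREIGN KEY (user_id) REFERENCES users(user_id)
        );

    Beyond tidiness, the split buys narrower rows on the frequently scanned table, separate access grants on the sensitive one, and the option for the child row to be absent entirely.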

    Read the article

  • The Importance of a Security Assessment - by Michael Terra, Oracle

    - by Darin Pendergraft
    Today's blog was written by Michael Terra, who was the subject matter expert for the recently announced Oracle Online Security Assessment. You can take the online assessment here: Take the Online Assessment

    Over the past decade, IT security has become a recognized and respected business discipline. Several factors have contributed to IT security becoming a core business and organizational enabler, including, but not limited to, increased external threats and increased regulatory pressure. Security is also viewed as a key enabler for strategic corporate activities such as mergers and acquisitions.

    Now, the challenge for senior security professionals is to develop an ongoing dialogue within their organizations about the importance of information security and how it can impact their organization's strategic objectives and mission. Conducting regular "security assessments" across the IT and physical infrastructure has become increasingly important. Security standards and frameworks, such as the international standard ISO 27001, are increasingly being adopted by organizations and their business partners as proof of their security posture, and security assessments are a great way to ensure continued alignment with these frameworks.

    Oracle offers a number of different security assessments covering a broad range of technologies. Some of these are short engagements conducted for free with our strategic customers and partners. Others are longer-term paid engagements delivered by Oracle Consulting Services or one of our partners. The goal of a security assessment (also known as a security audit or security review) is to ensure that necessary security controls are integrated into the design and implementation of a project, application or technology. A properly completed security assessment should provide documentation outlining any security gaps that exist in an infrastructure and the associated risks for those gaps. With that knowledge, an organization can choose to either mitigate, transfer, avoid or accept the risk.

    One example of an Oracle offering is a Security Readiness Assessment. The Oracle Security Readiness Assessment is a practical security architecture review focused on aligning an organization's enterprise security architecture to its business principles and strategic objectives. The service will establish a multi-phase security architecture roadmap focused on supporting new and existing business initiatives.

    Offering Overview
    The Security Readiness Assessment will:

    - Define an organization's current security posture and provide a roadmap to a desired future-state architecture by mapping security solutions to business goals
    - Incorporate commonly accepted security architecture concepts to streamline an organization's security vision from strategy to implementation
    - Define the people, process and technology implications of the desired future-state architecture

    The objective is to deliver cohesive, best-practice security architectures spanning multiple domains that are unique and specific to the context of your organization.

    Offering Details
    The Oracle Security Readiness Assessment is a multi-stage process with a dedicated Oracle security team supporting your organization. During the course of this free engagement, the team will focus on the following:

    - Review your current business operating model and supporting IT security structures and processes
    - Partner with your organization to establish a future-state security architecture leveraging Oracle's reference architectures, capability maps, and best practices
    - Provide guidance and recommendations on governance practices for the rollout and adoption of your future-state security architecture
    - Create an initial business case for the adoption of the future-state security architecture

    If you are interested in finding out more, ask your Sales Consultant or Account Manager for details.

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
    In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release. 11gR2 had four major focus areas:

    - A more simplified and customizable user experience
    - Support for cloud, mobile, and social applications
    - Extreme scalability
    - A clear upgrade path

    For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process. The plan should include initiation and discovery, assessment and analysis, future-state architecture, review and collaboration, and gap analysis. To help you better understand your upgrade choices, SDG, an Oracle partner, has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration.

    In the second of this series, Santosh Kumar Singh from SDG discusses the proper steps that should be taken during the planning-to-post-implementation phases to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com

    In the last of this series of white papers, Santosh will talk about identity and access management best practices and how these need to be considered when going through with an OIM migration. If you have not yet taken the opportunity, please read the first in this series, which discusses the migration approach, methodology, and tools to consider when planning a migration from SIM to OIM. Read the white paper for Part 1: Download Part 1 from SDGC.com

    About the Author: Santosh Kumar Singh, Identity and Access Management (IAM) Practice Leader

    In his capacity as SDG's IAM Practice Leader, Santosh has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement delivery for this practice. He brings over 12 years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) applications, federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprinting. Santosh possesses strong architecture and implementation expertise in all areas of these technologies and has repeatedly led teams in successfully deploying complex technical solutions.

    About SDG: SDG Corporation empowers forward-thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit its clients' needs; faster time-to-market with its pre-built solutions and frameworks; a broad-based foundation of domain experts; and deep program management expertise. (www.sdgc.com)

    Read the article

  • The curious case of SOA Human tasks' automatic completion

    - by Kavitha Srinivasan
    A large South Asian insurance industry customer using Oracle BPM and SOA ran into this. I have survived this ordeal previously myself but didn't think to blog it then. However, it seems like a good idea to share this knowledge with this reader community, so here goes...

    Symptom: A human task (in a SOA/BPEL/BPM process) completes automatically when it should have been assigned to a proper user. There are no stack traces and no related exceptions in the logs.

    Why: The product is designed to treat human tasks that don't have assignees as eligible for completion, and hence no warning/error messages are recorded in the logs.

    Use case variant: A variant of this use case, where an assignee doesn't exist in the repository, is treated as a recoverable error. One can find these among the 'pending recovery' instances in EM and reactivate the task by changing the assignees in the BPM Workspace as a process owner/administrator. But back to the use case where tasks get completed automatically...

    When: This happens when the users/groups assigned to a task are 'empty' or null. This has been seen only on tasks whose assignees are derived from an assignment expression, i.e. at runtime an XPath is used to determine who to assign the task to. (This should not happen if task assignees are populated via swim-lane roles.)

    How to detect this in EM: For instances that are auto-completed this way, one will notice in the Audit Trail of such instances that the 'outcome' of the task is empty. The 'acquired by' element will also show as empty/null. Enabling the oracle.soa.services.workflow.* logger in EM should print more verbose messages about this.

    How to fix this: The application code needs two fixes:

    - Input to the human task: the XSLT/XPath used to set the task assignee, and the process itself, should be enhanced to handle nulls better. For example: if no data is found, set assignees to an alternate value, force default assignees, etc.
    - Output from the human task: additionally, in the application code, check that the 'outcome' of the task is not null. If null, route the task to be performed again after setting the assignee correctly. Beginning with PS4FP, one should be able to use 'grab' to route back to the task to fire again.

    Hope this helps.

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some of which I regularly make myself. This post is designed to help beginners to the Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall: the case of the phantom ADF developer [add Scooby-Doo music here].

    ADF Business Components - triggers, default table values and instead-of views

    Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, and are typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However, it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

    The crime

    Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

        JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

    ...where X is the primary key value of the row at hand. In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users! It must be the phantom ADF developer! [insert dramatic music here]

    The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page.

    A false conclusion

    What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle tier by a user? How can our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug, maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014?

    Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic.

    Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer, which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
    Setting our locking mode to pessimistic, and opening in the Business Components Browser a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see among others the following log entries:

        [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
        [422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
        [423] Where binding param 1: 1206

    As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later, when we commit the record, we see:

        [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
        [442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
        [443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
        [444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
        [445] Update binding param 1: N
        [446] Where binding param 2: 1206
        [447] BookingsView1 notify COMMIT ...
        [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
        [449] EntityCache close prepared statement

    ...and as a result the changes are saved to the database, and the lock is released.

    Let's see what happens when we use the optimistic locking mode, this time changing a BOOKINGS record's CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However, when we save our record by issuing a commit, the following is recorded in the logs:

        [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
        [510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
        [511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
        [512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
        [513] Where binding param 1: 1205
        [514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
        [515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
        [516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
        [517] Update binding param 1: Y
        [518] Where binding param 2: 1205
        [519] BookingsView1 notify COMMIT ...
        [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
        [521] EntityCache close prepared statement

    Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring (line 512) and is released as part of the commit (lines 519-521). Therefore, with either optimistic or pessimistic locking, a lock is indeed issued.

    Our conclusion at this point must be: unless there's the unlikely cause that the LOCK statement is never really hitting the database, or the even less likely cause that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error.

    Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem. Who is the phantom?

    At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't yet understand what's causing it.
    This leads on to two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

    To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data," which details the entity object's Change-Indicator property, gives us the answer:

        At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.

    The guide further suggests how to make this solution more efficient:

        You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row. ... To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values.

    We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates. Rather, it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it via the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

    This leaves one last question. Now we know the mechanism under which ADF identifies a changed row, what we don't know is: what's changed, and who changed it?

    The real culprit

    What's changed? We know the record in the mid-tier has been changed by the user; however, ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed. This leaves us to conclude the database record has changed. But how, and by whom?

    There are three potential causes:

    Database triggers

    The database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, on an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action.

    Why this causes the issue specific to our ADF use: when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However, the cached record is instantly out of date, as the database triggers have modified the record that was actually written. Thus, when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and it detects that the record has been changed, giving us JBO-25014. This is probably the most common cause of this problem.

    Default values

    A second reason this issue can occur is another database feature, default column values.
    When creating a database table, the schema designer can define default values for specific columns. For example, a CREATED_BY column could default to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL; the database in this case will overwrite the column with the default value.

    As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

    Instead-of-trigger views

    I must admit I haven't double-checked this scenario, but it seems plausible: the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query but also a set of programmatic PL/SQL triggers where the developer can define their own logic for inserts, updates and deletes.

    While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record in the instead-of view, the record and data that go in are not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

    Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features is the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

    At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However, we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping database triggers or default values that existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize that such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise. Not ideal.

    Solving the mystery once and for all

    Luckily, ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database's feature set: at the entity object attribute level, the Refresh After Insert and Refresh After Update properties.
    Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes and to read a copy of the changed attributes back into its cached mid-tier record. Thus, the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

    [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view.]

    Alternatively, you can set the Change Indicator on one of the attributes. This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated. In turn, you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.

    Read the article

  • Odd Profiler Results with EF4

    - by AjarnMark
    I have been doing some testing of using Microsoft Entity Framework 4 with stored procedures and ran across some really odd results in SQL Server Profiler. The application that is running, which uses Entity Framework 4, is a simple web application written in C#, and the Entity Data Model is actually contained in a referenced class library of its own. I'll write more about my experiences with this later. For now the question is: why does SQL Profiler think that the stored procedure is running in master, and not in my application database?

    While analyzing the effects of using custom helper methods on my EDM classes to call the stored procedure, I decided to run Profiler while I stepped through the code so that I had a clear understanding of exactly when and what calls were made to the SQL Server. I ran Profiler switching back and forth between the TSQL and TSQL_SP templates. However, to reduce the number of result rows I needed to wade through, I set a filter on DatabaseID to be equal to my application's database. Each time I ran this, the only thing that I saw was an Audit:Login to the database, but no procedure or T-SQL statements executed, yet I was definitely getting results back on my web page. I tried other Profiler templates, still filtering on DatabaseID (tangent: I found, at least back in SQL 2000 Profiler, that filtering on DatabaseID was more reliable than filtering on DatabaseName; even though I'm now running SQL 2008, that habit sticks with me). Still no results other than the Login. Very weird!

    Finally, I decided to run Profiler with no filtering and discovered that the lines which represent my stored procedure and its T-SQL commands are all marked with DatabaseID = 1, which is master. Why in the world would that be? My procedure is definitely in the application database, and not in master, and there is nothing funny about the call to the procedure evident in Profiler (i.e. it is not called as MyAppDB.dbo.MyProcName, but rather just dbo.MyProcName). There must be something funny with the way the Entity Framework is wrapping this call, and I don't like it... I don't like it one bit. My primary PROD server contains 40+ databases, and when I need to profile something, I expect to be able to filter based on DatabaseID (for the record, I displayed DatabaseName in my results, too, and it also shows master).

    I find the same pattern of everything except the Login showing up as being in master when I run my version that uses standard LINQ to Entities instead of stored procedures, so that suggests it is not my code, but rather something funny with SQL Server 2008 Profiler or the Entity Framework.

    If you have any ideas about why this might be so, please comment below.

    Read the article

  • 2 Days to Go before MySQL Connect - Focus on Hands-On Labs

    - by Bertrand Matthelié
    The Oracle MySQL team is very eager to meet all MySQL community members, users, customers and partners gathering this weekend in San Francisco for MySQL Connect! Eight different Hands-On Labs will give you the opportunity to get hands-on experience on the following topics, all taking place in Plaza Room A.

    Saturday:

    11.30 am - Developing Applications with MySQL and Java - Mark Matthews, Oracle
    1.00 pm (2.5 hours long) - Getting Started with MySQL - Gillian Gunson and Alfredo Kojima, Oracle
    4.00 pm - Getting Started with MySQL Cluster - Santo Leto, Oracle
    5.30 pm - Improving Performance with the MySQL Performance Schema - Jesper Krogh, Oracle

    Sunday:

    10.15 am (2.5 hours long) - Focus on MySQL Replication - Sven Sandberg and Luis Soares, Oracle
    1.15 pm - MySQL Utilities - Charles Bell, Oracle
    2.45 pm - Performance Tuning with MySQL Enterprise Monitor - Mark Matthews, Oracle
    4.15 pm - MySQL Security: Authentication and Audit - Jonathon Coombes, Oracle

    Not registered yet? You can still save US$300 off the on-site fee! Attending Oracle OpenWorld or JavaOne? Add MySQL Connect to your registration for only US$100! Register Now!

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation, when:

    - The target table uses a unique filtered index; and
    - No key column of the filtered index is updated; and
    - A column from the filtering condition is updated; and
    - Transient key violations are possible

    Example Tables

    Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target. The target table contains three columns: an integer primary key, a single-character alternate key, and a status code column. A filtered unique index exists on the alternate key, but is only enforced where the status code is 'a':

        CREATE TABLE #Target
        (
            pk integer NOT NULL,
            ak character(1) NOT NULL,
            status_code character(1) NOT NULL,

            PRIMARY KEY (pk)
        );

        CREATE UNIQUE INDEX uq1
        ON #Target (ak)
        INCLUDE (status_code)
        WHERE status_code = 'a';

    The changes table contains just an integer primary key (to identify the target row to change) and the new status code:

        CREATE TABLE #Changes
        (
            pk integer NOT NULL,
            status_code character(1) NOT NULL,

            PRIMARY KEY (pk)
        );

    Sample Data

    The sample data for the example is:

        INSERT #Target (pk, ak, status_code)
        VALUES
            (1, 'A', 'a'),
            (2, 'B', 'a'),
            (3, 'C', 'a'),
            (4, 'A', 'd');

        INSERT #Changes (pk, status_code)
        VALUES
            (1, 'd'),
            (4, 'a');

                 Target                     Changes
        +-----------------------+    +------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦
        ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+
        ¦  4 ¦ A  ¦ d           ¦
        +-----------------------+

    The target table's alternate key (ak) column is unique, for rows where status_code = 'a'. Applying the changes to the target will change row 1 from status 'a' to status 'd', and row 4 from status 'd' to status 'a'. The result of applying all the changes will still satisfy the filtered unique index, because the 'A' in row 1 will be deleted from the index and the 'A' in row 4 will be added.

    Merge Test One

    Let's now execute a MERGE statement to apply the changes:

        MERGE #Target AS t
        USING #Changes AS c ON
            c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code THEN
            UPDATE SET status_code = c.status_code;

    The MERGE changes the two target rows as expected. The updated target table now contains:

        +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦ <- changed from 'a'
        ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ a           ¦ <- changed from 'd'
        +-----------------------+

    Merge Test Two

    Now let's repopulate the changes table to reverse the updates we just performed:

        TRUNCATE TABLE #Changes;

        INSERT #Changes (pk, status_code)
        VALUES
            (1, 'a'),
            (4, 'd');

    This will change row 1 back to status 'a' and row 4 back to status 'd'.
As a reminder, the current state of the tables is:

         Target                        Changes
+-----------------------+    +------------------+
¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
¦----+----+-------------¦    ¦----+-------------¦
¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦
¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦
¦  3 ¦ C  ¦ a           ¦    +------------------+
¦  4 ¦ A  ¦ a           ¦
+-----------------------+

We execute the same MERGE statement:

MERGE #Target AS t
USING #Changes AS c ON c.pk = t.pk
WHEN MATCHED AND c.status_code <> t.status_code
THEN UPDATE SET status_code = c.status_code;

However, this time we receive the following message:

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A).
The statement has been terminated.

Applying the changes using UPDATE

Let's now rewrite the MERGE to use UPDATE instead:

UPDATE t
SET status_code = c.status_code
FROM #Target AS t
JOIN #Changes AS c ON t.pk = c.pk
WHERE c.status_code <> t.status_code;

This query succeeds where the MERGE failed.  The two rows are updated as expected:

+-----------------------+
¦ pk ¦ ak ¦ status_code ¦
¦----+----+-------------¦
¦  1 ¦ A  ¦ a           ¦ <—changed back to 'a'
¦  2 ¦ B  ¦ a           ¦
¦  3 ¦ C  ¦ a           ¦
¦  4 ¦ A  ¦ d           ¦ <—changed back to 'd'
+-----------------------+

What went wrong with the MERGE?

In this test, the MERGE query execution happens to apply the changes in the order of the pk column.

In test one, this was not a problem: row 1 is removed from the unique filtered index (by changing status_code from 'a' to 'd') before row 4 is added.  At no point does the table contain two rows where ak = 'A' and status_code = 'a'.

In test two, however, the first change was to change row 1 from status 'd' to status 'a'.  This change means there would be two rows in the filtered unique index where ak = 'A' (both row 1 and row 4 meet the index filtering criterion status_code = 'a').

The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn't apply to MERGE in any case).  This strict rule applies regardless of the fact that, if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from 'a' to 'd', removing it from the filtered unique index and resolving the key violation).

Why it went wrong

The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman's blog post on maintaining unique indexes).  To summarize, though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to:

- Split each row update into a delete followed by an insert
- Filter out rows that would not change the index (due to the filter on the index, or a non-updating update)
- Sort the resulting stream by index key, with deletes before inserts
- Collapse delete/insert pairs on the same index key back into an update

The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).
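As an illustration (an assumed sketch, not from the original post), the split/filter/sort/collapse sequence for test two's filtered index maintenance works out as follows:

-- Hypothetical trace of the sequence, considering only operations against
-- the filtered index (rows where status_code = 'a'):
--
-- Split:    row 1 (status d -> a)  =>  INSERT index row (ak = 'A', pk = 1)
--           row 4 (status a -> d)  =>  DELETE index row (ak = 'A', pk = 4)
-- Filter:   both changes affect rows matching the index filter,
--           so neither is filtered out
-- Sort:     by index key, deletes before inserts:
--           DELETE ('A', pk = 4), then INSERT ('A', pk = 1)
-- Collapse: the delete/insert pair on the same key 'A' collapses into a
--           single UPDATE of that index entry: pk 4 -> 1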
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = 'A' from pk = 4 to pk = 1.  In case that is less than 100% clear, let's look at the operation in test two again:

         Target                     Changes                   Result
+-----------------------+    +------------------+    +-----------------------+
¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦
¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦
¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦    ¦  1 ¦ A  ¦ a           ¦
¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦    ¦  2 ¦ B  ¦ a           ¦
¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦
¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦
+-----------------------+                            +-----------------------+

From the filtered index's point of view (filtered for status_code = 'a' and shown in nonclustered index key order), the overall effect of the query is:

  Before           After
+---------+    +---------+
¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦
¦----+----¦    ¦----+----¦
¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦
¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦
¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦
+---------+    +---------+

The single net change is a change of pk from 4 to 1 for the nonclustered index entry ak = 'A'.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ak column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations.

The Execution Plans

The estimated MERGE execution plan that produces the incorrect key-violation error, and the successful UPDATE execution plan, are shown as images in the original post.  The MERGE execution plan is a narrow (per-row) update: a single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update: the clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained.

There is always a wide update plan for any query that modifies the database.  The narrow form is a performance optimization for cases where the number of rows is expected to be relatively small, and it is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur.

Workarounds

The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by:

- Adding all columns referenced in the filtered index's WHERE clause to the index key (INCLUDE is not sufficient); or
- Executing the query with trace flag 8790 set, e.g. OPTION (QUERYTRACEON 8790).

Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously.
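To make the two workarounds concrete, here is a minimal sketch against the example tables from earlier.  It is an assumed illustration of the options just listed, not code from the original post:

-- Workaround 1 (sketch): rebuild the filtered index so that status_code,
-- the column referenced in the index's WHERE clause, is part of the index
-- key instead of an INCLUDE column.
DROP INDEX uq1 ON #Target;

CREATE UNIQUE INDEX uq1
ON #Target (ak, status_code)
WHERE status_code = 'a';

-- Workaround 2 (sketch): keep the original index definition, and force a
-- wide (per-index) update plan with undocumented trace flag 8790
-- (QUERYTRACEON requires appropriate permissions).
MERGE #Target AS t
USING #Changes AS c ON c.pk = t.pk
WHEN MATCHED AND c.status_code <> t.status_code
THEN UPDATE SET status_code = c.status_code
OPTION (QUERYTRACEON 8790);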
Conclusion

The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance.  The MERGE plan may fail at execution time depending on the order in which rows are processed and on the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

Connect bug filed here

Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition

© 2012 Paul White – All Rights Reserved

Twitter: @SQL_Kiwi
Email: SQLkiwi@gmail.com

    Read the article

  • Taking the fear out of a Cloud initiative through the use of security tools

    - by user736511
    Typical employees, constituents, and business owners interact with online services at a level where their knowledge of back-end systems is low, and most of the time they have no interest in knowing the systems' architecture.  Most application administrators, while partially responsible for these systems' upkeep, interact with them very little, at least at an operational, platform level.  Of greatest interest to these groups is the consistent, reliable, and manageable operation of the interfaces with which they communicate.  Introducing the "Cloud" topic into any evolving architecture automatically raises concerns about data and identity security, simply because of the perception that enterprises that do not own the silicon cannot manage its content.  But is this really true?

In the majority of traditional architectures, the data, and the applications that access it, are physically distant from the organization that owns them.  They may reside in a shared data center, or in a geographically convenient location that spans large organizations' connectivity capabilities.  In the end, very often, the model of a "traditional" architecture is fairly close to the "new" Cloud architecture.  The most notable difference is that, by nature, a Cloud setup uses security as a core function, not as a necessary add-on.  Therefore, following best practices, one can say that data can be safer in the Cloud than in traditional, stove-piped environments where data access is segmented and difficult to audit.

The caveat is, of course, what "best practices" consist of, and here is where Oracle's security tools are perfectly suited to the task.  Since Oracle's model is to support very large organizations, it is fundamentally concerned with distributed applications, databases, and their security, and the related Identity Management products and DB Security options reflect that concept.  In the end, consumers of applications and their data are served more safely in a controlled Cloud environment, while realizing the many cost savings associated with it.  Having very fast resources to serve them (such as the Exa* platform) makes the concept even more attractive.

Finally, if a Cloud strategy does not seem feasible, consider the pros and cons of a traditional vs. a Cloud architecture.  Using the exact same criteria and business goals/traditions, and with Oracle's technology, you might be hard pressed to justify maintaining the technical status quo on security grounds alone.

For additional information please visit Oracle's Cloud Security page at: http://www.oracle.com/us/technologies/cloud/cloud-security-428855.html

    Read the article

  • The Oracle Cash Management Secret Very Few Customers Know About

    - by Theresa Hickman
    Did you know that Oracle Cash Management has a robust cash positioning feature? I had no idea. I was under the mistaken impression that Oracle Cash Management only did bank statement reconciliations. It seems I am not alone. In fact, many Oracle Financials customers are not aware of this either, even though it is delivered for free with the Oracle Financials license. Even better, last week Oracle released an enhancement to Oracle Cash Management for Release 12 that will greatly help customers with their cash positioning needs.

As we all know, credit is tight these days. Companies need better visibility of their cash and other liquidity positions to make better use of their cash resources. Today, many customers manage their cash positions manually using spreadsheets. We also hear that many of them maintain larger-than-normal balances in numerous bank accounts because they just do not have the visibility, and therefore the comfort, they need. Although spreadsheets may work in the short term, they are not the best way to manage your cash positions for the long term, especially if you have dozens or even hundreds of bank and brokerage accounts. Spreadsheets are also a lot riskier because they can be overwritten or deleted, and they are difficult to audit.

With the newly enhanced positioning feature in Oracle Cash Management, customers can manage their daily cash positions using an Excel-like interface that is very flexible and user-configurable. You can link the worksheet to an unlimited number of bank accounts to automatically retrieve your opening balances and your current/intra-day cash inflows and outflows, as well as your expected cash flows from your FX, Investment, and Debt positions if you have Oracle's Treasury module. Oracle Cash Management also integrates directly with Oracle Receivables, Oracle Payables, and Payroll, which adds to the comprehensive picture of what's happening with your organization's cash in real time. A screenshot of the cash positioning page is available in the original post.

As the screenshot shows, your treasurers can obtain a holistic view of all cash positions across any number of bank accounts, as well as other sources of cash flow movements. Depending on how they manage their accounts, they can also use this feature to initiate or monitor bank account sweeps or transfers between their zero-balance accounts (ZBA) or cash pools. The cash position worksheet provides drill-down for more detail, and the ability to enter items directly into the worksheet manually gives even greater flexibility and control.

The enhancements to this feature were released last week; the patch lists for Release 12.0.6 and 12.1.1 are given in the original post. For more information, visit the following website: http://launch.oracle.com. PIN: yes2try

    Read the article

< Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >