Search Results

Search found 29193 results on 1168 pages for 'sql merge'.

  • Resolving data redundancy up front

    - by okeofs
    Introduction

    As all of us do when confronted with a problem, the resource of choice is to 'Google it'. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a manner in which to obtain the table names from each database, to ascertain potential overlap. As the source data comes from a SQL database created from dumps of a third-party product, one could say that there were +/- 95 tables for each database.

    Yes, I know that the first instinct is to use the system stored procedure

        exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'

    However, if one stops to think about this, it would be nice to have all the results in a temporary or disk-based table, which in itself implies additional labour. This said, I decided to 're-invent' the wheel. The full code sample may be found at the bottom of this article.

    Define a few temporary tables and variables

        declare @SQL varchar(max);
        declare @databasename varchar(75)
        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        -- A temp table to hold the names of my databases
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- A temp table with the same database names as above, HOWEVER using an
        -- identity number (recNo) as a loop variable.
        -- You will note below that I loop through until I reach 25, as at that
        -- point the system databases, the reporting server database etc. begin.
        -- 1-24 are user databases. These are really what I was looking for.
        -- Whilst NOT the best solution, it works, and the code was meant as a
        -- quick and dirty.
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- My output table showing the database name and table name
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

    Insert the database names into a temporary table

    I pull the database names using the system stored procedure sp_databases:

        INSERT INTO #rawdata1
        EXEC sp_databases
        GO

    Insert the results from #rawdata1 into a table containing a record number, #rawdata11, so that I can LOOP through the extract:

        INSERT INTO #rawdata11
        SELECT * FROM #rawdata1

    We now declare 3 more variables. @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the 'current' database name being used in the current pass of the loop, as in order to obtain the tables for that database we need to issue a 'USE' statement, an insert command and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the 'USE' statement PLUS the 'insert' code statements. We now initialize @kounter to 1.

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

    The Loop

    The astute reader will remember that the temporary table #rawdata11 contains our database names and each 'database row' has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the temporary variable @DatabaseName (see below). Note that I use the row number as a part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (the SET @SQL statement in the loop below). Finally, after all the tables for that given database have been placed in temporary table ##rawdata3, I increment the counter and continue on. Note that I use a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution.

        WHILE (@kounter < 25)
        BEGIN
           SELECT @DatabaseName = database_name FROM #rawdata11 WHERE recNo = @kounter
           SET @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 '
                    + ' SELECT table_catalog, Table_name FROM information_schema.tables'
           EXEC sp_sqlexec @Sql
           SET @kounter = @kounter + 1
        END

    The full code extract

    Here is the full code sample.

        declare @SQL varchar(max);
        declare @databasename varchar(75)
        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )
        INSERT INTO #rawdata1
        EXEC sp_databases
        GO
        INSERT INTO #rawdata11
        SELECT * FROM #rawdata1
        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1
        WHILE (@kounter < 25)
        BEGIN
           SELECT @databasename = database_name FROM #rawdata11 WHERE recNo = @kounter
           SET @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 '
                    + ' SELECT table_catalog, Table_name FROM information_schema.tables'
           EXEC sp_sqlexec @Sql
           SET @kounter = @kounter + 1
        END

        SELECT * FROM ##rawdata3 WHERE table_name LIKE '%SalesOrderHeader%'
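    As an aside, the counter and the fixed limit of 25 can be avoided entirely. Here is a minimal, loop-free sketch (not part of the original article) that builds the same dynamic SQL from sys.databases instead:

        -- Sketch: concatenate one INSERT per user database (database_id 1-4
        -- are master, tempdb, model and msdb) and run the whole batch once.
        DECLARE @SQL nvarchar(max);
        SET @SQL = N'';
        SELECT @SQL = @SQL
             + N'INSERT INTO ##rawdata3 SELECT table_catalog, table_name FROM '
             + QUOTENAME(name) + N'.information_schema.tables; '
        FROM sys.databases
        WHERE database_id > 4;
        EXEC sp_executesql @SQL;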

    Read the article

  • How to generate DELETE statements in PL/SQL, based on the tables FK relations?

    - by The chicken in the kitchen
    Is it possible, via script or tool, to automatically generate many delete statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER) and there are 30 tables with FK references to its CHICKEN_CODE that I need to delete from; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL tool or script that I can run to generate all the necessary delete statements, based on the FK relations, for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2.

    This is a view I have previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN
        (
           OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO
        )
        AS
           SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME,
                  '(' || LTRIM (EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
             FROM (SELECT CON1.OWNER OWNER_1, CON1.TABLE_NAME TABLE_NAME_1,
                          CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1, CON1.DELETE_RULE,
                          CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME,
                          COL.POSITION, COL.COLUMN_NAME
                     FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                    WHERE CON.OWNER = 'TABLE_OWNER'
                      AND CON.TABLE_NAME = 'TABLE_OWNED'
                      AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                      AND COL.TABLE_NAME = CON1.TABLE_NAME
                      AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                      --AND CON1.OWNER = CON.OWNER
                      AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                      AND CON1.CONSTRAINT_TYPE = 'R'
                    GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME,
                             CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME,
                             CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME)
            GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS... Please pay attention to this: http://stackoverflow.com/questions/485581/generate-delete-statement-from-foreign-key-relationships-in-sql-2008/2677145#2677145 - another user has just written it in SQL Server 2008; is anyone able to convert it to Oracle 10g PL/SQL? I am not able to... :-( This is the code written by another user in SQL Server 2008:

        DECLARE @COLUMN_NAME AS sysname
        DECLARE @TABLE_NAME AS sysname
        DECLARE @IDValue AS int

        SET @COLUMN_NAME = '<Your COLUMN_NAME here>'
        SET @TABLE_NAME = '<Your TABLE_NAME here>'
        SET @IDValue = 123456789

        DECLARE @sql AS varchar(max)

        ;WITH RELATED_COLUMNS AS
        (
            SELECT QUOTENAME(c.TABLE_SCHEMA) + '.' + QUOTENAME(c.TABLE_NAME) AS [OBJECT_NAME],
                   c.COLUMN_NAME
            FROM PBANKDW.INFORMATION_SCHEMA.COLUMNS AS c WITH (NOLOCK)
            INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLES AS t WITH (NOLOCK)
                ON c.TABLE_CATALOG = t.TABLE_CATALOG
               AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
               AND c.TABLE_NAME = t.TABLE_NAME
               AND t.TABLE_TYPE = 'BASE TABLE'
            INNER JOIN
            (
                SELECT rc.CONSTRAINT_CATALOG, rc.CONSTRAINT_SCHEMA,
                       lkc.TABLE_NAME, lkc.COLUMN_NAME
                FROM PBANKDW.INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc WITH (NOLOCK)
                INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE lkc WITH (NOLOCK)
                    ON lkc.CONSTRAINT_CATALOG = rc.CONSTRAINT_CATALOG
                   AND lkc.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA
                   AND lkc.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
                INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc WITH (NOLOCK)
                    ON rc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG
                   AND rc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
                   AND rc.UNIQUE_CONSTRAINT_NAME = tc.CONSTRAINT_NAME
                INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE rkc WITH (NOLOCK)
                    ON rkc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG
                   AND rkc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
                   AND rkc.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
                WHERE rkc.COLUMN_NAME = @COLUMN_NAME
                  AND rkc.TABLE_NAME = @TABLE_NAME
            ) AS j
                ON j.CONSTRAINT_CATALOG = c.TABLE_CATALOG
               AND j.CONSTRAINT_SCHEMA = c.TABLE_SCHEMA
               AND j.TABLE_NAME = c.TABLE_NAME
               AND j.COLUMN_NAME = c.COLUMN_NAME
        )
        SELECT @sql = COALESCE(@sql, '') + 'DELETE FROM ' + [OBJECT_NAME]
                    + ' WHERE ' + [COLUMN_NAME] + ' = ' + CONVERT(varchar, @IDValue)
                    + CHAR(13) + CHAR(10)
        FROM RELATED_COLUMNS

        PRINT @sql

    Thanks to Charles, this is the latest (not working) release of the software; I have added a parameter with the OWNER, because the referential integrities propagate through about 5 other Oracle users (!!!):

        CREATE OR REPLACE PROCEDURE delete_cascade (
           parent_table         VARCHAR2,
           parent_table_owner   VARCHAR2)
        IS
           cons_name        VARCHAR2 (30);
           tab_name         VARCHAR2 (30);
           tab_name_owner   VARCHAR2 (30);
           parent_cons      VARCHAR2 (30);
           parent_col       VARCHAR2 (30);
           delete1          VARCHAR (500);
           delete2          VARCHAR (500);
           delete_command   VARCHAR (4000);

           CURSOR cons_cursor IS
              SELECT constraint_name, r_constraint_name, table_name, constraint_type
                FROM all_constraints
               WHERE constraint_type = 'R'
                 AND r_constraint_name IN (SELECT constraint_name
                                             FROM all_constraints
                                            WHERE constraint_type IN ('P', 'U')
                                              AND table_name = parent_table
                                              AND owner = parent_table_owner)
                 AND delete_rule = 'NO ACTION';

           CURSOR tabs_cursor IS
              SELECT DISTINCT table_name
                FROM all_cons_columns
               WHERE constraint_name = cons_name;

           CURSOR child_cols_cursor IS
              SELECT column_name, position
                FROM all_cons_columns
               WHERE constraint_name = cons_name AND table_name = tab_name;
        BEGIN
           FOR cons IN cons_cursor LOOP
              cons_name := cons.constraint_name;
              parent_cons := cons.r_constraint_name;

              SELECT DISTINCT table_name, owner
                INTO tab_name, tab_name_owner
                FROM all_cons_columns
               WHERE constraint_name = cons_name;

              delete_cascade (tab_name, tab_name_owner);

              delete_command := '';
              delete1 := '';
              delete2 := '';

              FOR col IN child_cols_cursor LOOP
                 SELECT DISTINCT column_name
                   INTO parent_col
                   FROM all_cons_columns
                  WHERE constraint_name = parent_cons AND position = col.position;

                 IF delete1 IS NULL THEN delete1 := col.column_name;
                 ELSE delete1 := delete1 || ', ' || col.column_name;
                 END IF;

                 IF delete2 IS NULL THEN delete2 := parent_col;
                 ELSE delete2 := delete2 || ', ' || parent_col;
                 END IF;
              END LOOP;

              delete_command :=
                    'delete from ' || tab_name_owner || '.' || tab_name
                 || ' where (' || delete1 || ') in (select ' || delete2
                 || ' from ' || parent_table_owner || '.' || parent_table || ');';

              INSERT INTO ris VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command);
              COMMIT;
           END LOOP;
        END;
        /

    In the cursor CONS_CURSOR I have added the condition AND delete_rule = 'NO ACTION' in order to avoid deletion in the case of referential integrities with DELETE_RULE = 'CASCADE' or DELETE_RULE = 'SET NULL'. Now I have tried to turn the stored procedure into a stored function, but the delete statements are not correct:

        CREATE OR REPLACE FUNCTION deletecascade (
           parent_table         VARCHAR2,
           parent_table_owner   VARCHAR2)
           RETURN VARCHAR2
        IS
           cons_name                VARCHAR2 (30);
           tab_name                 VARCHAR2 (30);
           tab_name_owner           VARCHAR2 (30);
           parent_cons              VARCHAR2 (30);
           parent_col               VARCHAR2 (30);
           delete1                  VARCHAR (500);
           delete2                  VARCHAR (500);
           delete_command           VARCHAR (4000);
           AT_LEAST_ONE_ITERATION   NUMBER DEFAULT 0;

           CURSOR cons_cursor IS
              SELECT constraint_name, r_constraint_name, table_name, constraint_type
                FROM all_constraints
               WHERE constraint_type = 'R'
                 AND r_constraint_name IN (SELECT constraint_name
                                             FROM all_constraints
                                            WHERE constraint_type IN ('P', 'U')
                                              AND table_name = parent_table
                                              AND owner = parent_table_owner)
                 AND delete_rule = 'NO ACTION';

           CURSOR tabs_cursor IS
              SELECT DISTINCT table_name
                FROM all_cons_columns
               WHERE constraint_name = cons_name;

           CURSOR child_cols_cursor IS
              SELECT column_name, position
                FROM all_cons_columns
               WHERE constraint_name = cons_name AND table_name = tab_name;
        BEGIN
           FOR cons IN cons_cursor LOOP
              AT_LEAST_ONE_ITERATION := 1;
              cons_name := cons.constraint_name;
              parent_cons := cons.r_constraint_name;

              SELECT DISTINCT table_name, owner
                INTO tab_name, tab_name_owner
                FROM all_cons_columns
               WHERE constraint_name = cons_name;

              delete1 := '';
              delete2 := '';

              FOR col IN child_cols_cursor LOOP
                 SELECT DISTINCT column_name
                   INTO parent_col
                   FROM all_cons_columns
                  WHERE constraint_name = parent_cons AND position = col.position;

                 IF delete1 IS NULL THEN delete1 := col.column_name;
                 ELSE delete1 := delete1 || ', ' || col.column_name;
                 END IF;

                 IF delete2 IS NULL THEN delete2 := parent_col;
                 ELSE delete2 := delete2 || ', ' || parent_col;
                 END IF;
              END LOOP;

              delete_command :=
                    'delete from ' || tab_name_owner || '.' || tab_name
                 || ' where (' || delete1 || ') in (select ' || delete2
                 || ' from ' || parent_table_owner || '.' || parent_table || ');'
                 || deletecascade (tab_name, tab_name_owner);

              INSERT INTO ris VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command);
              COMMIT;
           END LOOP;

           IF AT_LEAST_ONE_ITERATION = 1 THEN
              RETURN ' where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION;';
           ELSE
              RETURN NULL;
           END IF;
        END;
        /

    Please assume that V_CHICKEN and V_NATION are the criteria to select the CHICKEN to delete from the root table: the condition is "where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION" on the root table.
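    For the ordering half of the problem (which tables must be emptied first), one possible starting point - a sketch only, not the poster's code, assuming the CHICKEN root table from the question - is a hierarchical query over the FK graph that lists the deepest child tables first:

        -- Sketch: list tables referencing CHICKEN (directly or indirectly),
        -- deepest children first, i.e. in a safe DELETE order.
        -- (A table reached by several FK paths will appear more than once.)
        WITH fk AS (
           SELECT c.owner AS child_owner, c.table_name AS child_table,
                  p.owner AS parent_owner, p.table_name AS parent_table
             FROM all_constraints c
             JOIN all_constraints p
               ON p.owner = c.r_owner
              AND p.constraint_name = c.r_constraint_name
            WHERE c.constraint_type = 'R')
        SELECT child_owner || '.' || child_table AS table_to_empty, LEVEL AS depth
          FROM fk
         START WITH parent_table = 'CHICKEN'
        CONNECT BY NOCYCLE PRIOR child_table = parent_table
               AND PRIOR child_owner = parent_owner
         ORDER BY LEVEL DESC;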

    Read the article

  • Merge functionality of two xsl files into a single file (continued.....)

    - by anuamb
    This is in continuation of my question: Merge functionality of two xsl files into a single file (not a xsl import or include issue). I have to merge the solution (xsl) of the above question into the xsl below:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:fo="http://www.w3.org/1999/XSL/Format">
          <xsl:template match="/">
            <Declaration>
              <Message>
                <Meduim><xsl:value-of select="/Declaration/Message/Meduim"/></Meduim>
                <MessageIdentifier><xsl:value-of select="/Declaration/Message/MessageIdentifier"/></MessageIdentifier>
                <ControlingAgencyCode><xsl:value-of select="/Declaration/Message/ControlingAgencyCode"/></ControlingAgencyCode>
                <AssociationAssignedCode><xsl:value-of select="/Declaration/Message/AssociationAssignedCode"/></AssociationAssignedCode>
                <CommonAccessReference><xsl:value-of select="/Declaration/Message/CommonAccessReference"/></CommonAccessReference>
              </Message>
              <BeginingOfMessage>
                <MessageCode><xsl:value-of select="/Declaration/BeginingOfMessage/MessageCode"/></MessageCode>
                <DeclarationCurrency><xsl:value-of select="/Declaration/BeginingOfMessage/DeclarationCurrency"/></DeclarationCurrency>
                <MessageFunction><xsl:value-of select="/Declaration/BeginingOfMessage/MessageFunction"/></MessageFunction>
              </BeginingOfMessage>
              <Header>
                <ProcessingInformation>
                  <xsl:for-each select="/Declaration/Header/ProcessingInformation/ProcessingInstructions">
                    <ProcessingInstructions><xsl:value-of select="."/></ProcessingInstructions>
                  </xsl:for-each>
                </ProcessingInformation>
                <xsl:for-each select="/Declaration/Header/Seal">
                  <Seal>
                    <SealID><xsl:value-of select="SealID"/></SealID>
                    <SealLanguage><xsl:value-of select="SealLanguage"/></SealLanguage>
                  </Seal>
                </xsl:for-each>
                <xsl:choose>
                  <xsl:when test='/Declaration/Header/DeclarantsReference = ""'>
                    <DeclarantsReference><xsl:text disable-output-escaping="no">A</xsl:text></DeclarantsReference>
                  </xsl:when>
                  <xsl:otherwise>
                    <DeclarantsReference><xsl:value-of select="/Declaration/Header/DeclarantsReference"/></DeclarantsReference>
                  </xsl:otherwise>
                </xsl:choose>
                <xsl:for-each select="/Declaration/Header/Items">
                  <Items>
                    <CustomsStatusOfGoods>
                      <CPC><xsl:value-of select="CustomsStatusOfGoods/CPC"/></CPC>
                      <CommodityCode><xsl:value-of select="CustomsStatusOfGoods/CommodityCode"/></CommodityCode>
                      <ECSuplementaryMeasureCode1><xsl:value-of select="CustomsStatusOfGoods/ECSuplementaryMeasureCode1"/></ECSuplementaryMeasureCode1>
                      <ECSuplementaryMeasureCode2><xsl:value-of select="CustomsStatusOfGoods/ECSuplementaryMeasureCode2"/></ECSuplementaryMeasureCode2>
                      <PreferenceCode><xsl:value-of select="CustomsStatusOfGoods/PreferenceCode"/></PreferenceCode>
                    </CustomsStatusOfGoods>
                    <xsl:for-each select="ItemAI">
                      <ItemAI>
                        <AICode><xsl:value-of select="AICode"/></AICode>
                        <AIStatement><xsl:value-of select="AIStatement"/></AIStatement>
                        <AILanguage><xsl:value-of select="AILanguage"/></AILanguage>
                      </ItemAI>
                    </xsl:for-each>
                    <Locations>
                      <CountryOfOriginCode><xsl:value-of select="Locations/CountryOfOriginCode"/></CountryOfOriginCode>
                      <xsl:for-each select="Locations/ItemCountryonRouteCode">
                        <ItemCountryonRouteCode><xsl:value-of select="."/></ItemCountryonRouteCode>
                      </xsl:for-each>
                      <ItemDispatchCountry><xsl:value-of select="Locations/ItemDispatchCountry"/></ItemDispatchCountry>
                      <ItemDestinationCountry><xsl:value-of select="Locations/ItemDestinationCountry"/></ItemDestinationCountry>
                    </Locations>
                    <Measurements>
                      <GrossMass><xsl:value-of select="Measurements/GrossMass"/></GrossMass>
                      <NetMass><xsl:value-of select="Measurements/NetMass"/></NetMass>
                      <SupplementaryUnits><xsl:value-of select="Measurements/SupplementaryUnits"/></SupplementaryUnits>
                      <ThirdQuantity><xsl:value-of select="Measurements/ThirdQuantity"/></ThirdQuantity>
                    </Measurements>
                    <xsl:for-each select="Package">
                      <Package>
                        <PackageNumber><xsl:value-of select="PackageNumber"/></PackageNumber>
                        <PackageKind><xsl:value-of select="PackageKind"/></PackageKind>
                        <PackageMarks><xsl:value-of select="PackageMarks"/></PackageMarks>
                        <PackageLanguage><xsl:value-of select="PackageLanguage"/></PackageLanguage>
                      </Package>
                    </xsl:for-each>
                    <PriceValue>
                      <ItemStatisticalValue><xsl:value-of select="PriceValue/ItemStatisticalValue"/></ItemStatisticalValue>
                      <ItemPrice><xsl:value-of select="PriceValue/ItemPrice"/></ItemPrice>
                    </PriceValue>
                    <ItemReferences>
                      <xsl:for-each select="ItemReferences/ContainerID">
                        <ContainerID><xsl:value-of select="."/></ContainerID>
                      </xsl:for-each>
                      <QuotaNo><xsl:value-of select="ItemReferences/QuotaNo"/></QuotaNo>
                      <UNDangerousGoodsCode><xsl:value-of select="ItemReferences/UNDangerousGoodsCode"/></UNDangerousGoodsCode>
                    </ItemReferences>
                    <GoodsDescription>
                      <GoodsDescription><xsl:value-of select="GoodsDescription/GoodsDescription"/></GoodsDescription>
                      <GoodsDescriptionLanguage><xsl:value-of select="GoodsDescription/GoodsDescriptionLanguage"/></GoodsDescriptionLanguage>
                    </GoodsDescription>
                    <Documents>
                      <xsl:for-each select="Documents/PreviousDocument">
                        <PreviousDocument>
                          <PreviousDocumentKind><xsl:value-of select="PreviousDocumentKind"/></PreviousDocumentKind>
                          <PreviousDocumentIdentifier><xsl:value-of select="PreviousDocumentIdentifier"/></PreviousDocumentIdentifier>
                          <PreviousDocumentType><xsl:value-of select="PreviousDocumentType"/></PreviousDocumentType>
                          <PreviousDocumentLanguage><xsl:value-of select="PreviousDocumentLanguage"/></PreviousDocumentLanguage>
                        </PreviousDocument>
                      </xsl:for-each>
                      <xsl:for-each select="Documents/ItemDocument">
                        <ItemDocument>
                          <DocumentCode><xsl:value-of select="DocumentCode"/></DocumentCode>
                          <DocumentPart><xsl:value-of select="DocumentPart"/></DocumentPart>
                          <DocumentQuantity><xsl:value-of select="DocumentQuantity"/></DocumentQuantity>
                          <DocumentReason><xsl:value-of select="DocumentReason"/></DocumentReason>
                          <DocumentReference><xsl:value-of select="DocumentReference"/></DocumentReference>
                          <DocumentStatus><xsl:value-of select="DocumentStatus"/></DocumentStatus>
                          <DocumentLanguage><xsl:value-of select="DocumentLanguage"/></DocumentLanguage>
                        </ItemDocument>
                      </xsl:for-each>
                    </Documents>
                    <Valuation>
                      <ValuationMethodCode><xsl:value-of select="Valuation/ValuationMethodCode"/></ValuationMethodCode>
                      <ItemValuationAdjustmentCode><xsl:value-of select="Valuation/ItemValuationAdjustmentCode"/></ItemValuationAdjustmentCode>
                      <ItemValuationAdjustmentPercentage><xsl:value-of select="Valuation/ItemValuationAdjustmentPercentage"/></ItemValuationAdjustmentPercentage>
                    </Valuation>
                    <ItemTransportChargeMOP><xsl:value-of select="ItemTransportChargeMOP"/></ItemTransportChargeMOP>
                    <xsl:for-each select="ItemProcessingInstructions">
                      <ItemProcessingInstructions><xsl:value-of select="."/></ItemProcessingInstructions>
                    </xsl:for-each>
                  </Items>
                </xsl:for-each>
                <NumberOfPackages><xsl:value-of select="/Declaration/Header/NumberOfPackages"/></NumberOfPackages>
              </Header>
            </Declaration>
          </xsl:template>
        </xsl:stylesheet>

    So for this source xml:

        <Declaration>
          <Message>
            <Meduim>#+#</Meduim>
            <MessageIdentifier>AA</MessageIdentifier>
            <CommonAccessReference></CommonAccessReference>
          </Message>
          <BeginingOfMessage>
            <MessageCode>ISD</MessageCode>
            <DeclarationCurrency></DeclarationCurrency>
            <MessageFunction>5</MessageFunction>
          </BeginingOfMessage>
        </Declaration>

    the final output is:

        <Declaration>
          <Message>
            <Meduim></Meduim>
            <MessageIdentifier>AA</MessageIdentifier>
          </Message>
          <BeginingOfMessage>
            <MessageCode>ISD</MessageCode>
            <MessageFunction>5</MessageFunction>
          </BeginingOfMessage>
        </Declaration>

    Read the article

  • Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

    - by Giri Mandalika
    Here is a failure sample:

        SQL> set serveroutput on
        SQL> alter package APPS.FND_TRACE compile body;

        Warning: Package Body altered with compilation errors.

        SQL> show errors
        Errors for PACKAGE BODY APPS.FND_TRACE:

        LINE/COL ERROR
        -------- -----------------------------------------------------------------
        235/6    PL/SQL: Statement ignored
        235/6    PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared
        ..

    By default, the DBMS_SYSTEM package is accessible only from the SYS schema, and there is no public synonym created for this package. So the solution is to create the public synonym and grant the "execute" privilege on the DBMS_SYSTEM package to all database users or to a specific user. e.g.,

        SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system;
        Synonym created.

        SQL> GRANT EXECUTE ON dbms_system TO APPS;
        Grant succeeded.

        -- OR --

        SQL> GRANT EXECUTE ON dbms_system TO PUBLIC;
        Grant succeeded.

        SQL> alter package APPS.FND_TRACE compile body;
        Package body altered.

    Note that merely granting the execute privilege is not enough -- creating the public synonym is just as important in resolving this issue.
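    If granting DBMS_SYSTEM to PUBLIC is wider than you would like, a narrower variation (a sketch, not from the original post) is to scope both the grant and the synonym to the one schema that needs it:

        -- Run as SYS: a private synonym in the APPS schema only, instead of
        -- a PUBLIC synonym visible to every user.
        GRANT EXECUTE ON sys.dbms_system TO apps;
        CREATE SYNONYM apps.dbms_system FOR sys.dbms_system;
        ALTER PACKAGE apps.fnd_trace COMPILE BODY;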

    Read the article

  • Inside Red Gate - Introduction

    - by Simon Cooper
    I work for Red Gate Software, a software company based in Cambridge, UK. In this series of posts, I'll be discussing how we develop software at Red Gate, and what we get up to, all from a dev's perspective. Before I start the series proper, in this post I'll give you a brief background to what I have done and continue to do as part of my job. The initial few posts will be giving an overview of how the development sections of the company work. There is much more to a software company than writing the products, but as I'm a developer my experience is biased towards that, and so that is what this series will concentrate on. My background Red Gate was founded in 1999 by Neil Davidson & Simon Galbraith, who continue to be joint CEOs. I joined in September 2007, and immediately set to work writing a new Check for Updates client and server (CfU), as part of a team of 2. That was finished at the end of 2007. I then joined the SQL Compare team. The first large project I worked on was updating SQL Compare for SQL Server 2008, resulting in SQL Compare 7, followed by a UI redesign in SQL Compare 8. By the end of this project in early 2009 I had become the 'go-to' guy for the SQL Compare Engine (I'll explain what that means in a later post), which is used by most of the other tools in the SQL Tools division in one way or another. After that, we decided to expand into Oracle, and I wrote the prototype for what became the engine of Schema Compare for Oracle (SCO). In the latter half of 2009 a full project was started, resulting in the release of SCO v1 in early 2010. Near the end of 2010 I moved to the .NET division, where I joined the team working on SmartAssembly. That's what I continue to work on today. The posts in this series will cover my experience in software development at Red Gate, within the SQL Tools and .NET divisions. Hopefully, you'll find this series an interesting look at what exactly goes into producing the software at Red Gate.

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal... 4 days left

    - by Testas
    Only 4 days left to win a chance to see Paul and Kimberly's masterclass. Send an email to [email protected] with "Master class" in the subject line for an opportunity to win a free ticket to this event. And if you do not win..... you can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

    More information about the seminar:

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    Read the article

  • 24 Hours of PASS scheduling

    - by Rob Farley
    I have a new appreciation for Tom LaRock (@sqlrockstar), who is doing a tremendous job leading the organising committee for the 24 Hours of PASS event (Twitter: #24hop). We've just been going through the list of speakers and their preferences for time slots, and hopefully we've kept everyone fairly happy. All the submitted sessions (59 of them) were put up for a vote, with over a thousand of you picking your favourites. The top 28 sessions as voted were all included (24 sessions plus 4 reserves), and duplicates (when a single presenter had two sessions in the top 28) were swapped out for others. For example, both sessions submitted by Cindy Gross were in the top 28. These swaps were chosen by the committee to get a good balance of topics. Amazingly, some big names missed out, and even the top ten included some surprises. T-SQL, indexes and Reporting featured well in the top ten, and in the end, the mix between BI, Dev and DBA ended up quite nicely too. The ten most voted-for sessions were (in order):

    1. Jennifer McCown - T-SQL Code Sins: The Worst Things We Do to Code and Why
    2. Michelle Ufford - Index Internals for Mere Mortals
    3. Audrey Hammonds - T-SQL Awesomeness: 3 Ways to Write Cool SQL
    4. Cindy Gross - SQL Server Performance Tools
    5. Jes Borland - Reporting Services 201: the Next Level
    6. Isabel de la Barra - SQL Server Performance
    7. Karen Lopez - Five Physical Database Design Blunders and How to Avoid Them
    8. Julie Smith - Cool Tricks to Pull From Your SSIS Hat
    9. Kim Tessereau - Indexes and Execution Plans
    10. Jen Stirrup - Dashboards Design and Practice using SSRS

    I think you'll all agree this is shaping up to be an excellent event.

    Read the article

  • How to merge .rpmnew files in Pluggable Authentication Modules (PAM)?

    - by Question Overflow
    A few .rpmnew files are being created after performing an upgrade of the Fedora OS. The normal procedure for merging .rpmnew files into the original ones is to compare the differences, make the necessary changes to the configuration in the .rpmnew files, and replace the original files with the new ones. However, the files contained in /etc/pam.d are links to files with the same filename appended with -ac; for example, password-auth links to password-auth-ac and has password-auth.rpmnew as the upgrade. How do I go about merging these files?
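    For reference, the "normal procedure" described above would look something like the sketch below (paths are taken from the question; note that authconfig maintains the -ac files and may regenerate them, so this is illustrative only):

        # Compare the live -ac file with the packaged default shipped as .rpmnew
        diff /etc/pam.d/password-auth-ac /etc/pam.d/password-auth.rpmnew

        # Hand-merge the wanted changes into the -ac file, then drop the .rpmnew
        vi /etc/pam.d/password-auth-ac
        rm /etc/pam.d/password-auth.rpmnew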

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading about, and trying out, several ideas and input from others.

    Must-Haves

    Aliases

        [alias]
          ci = commit
          st = status
          co = checkout
          oneline = log --pretty=oneline
          br = branch
          la = log --pretty=\"format:%ad %h (%an): %s\" --date=short
          df = diff
          dc = diff --cached
          lg = log -p
          lol = log --graph --decorate --pretty=oneline --abbrev-commit
          lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
          ls = ls-files
          ign = ls-files -o -i --exclude-standard

    Colors

        [color]
          ui = auto
        [color "branch"]
          current = yellow reverse
          local = yellow
          remote = green
        [color "diff"]
          meta = yellow bold
          frag = magenta bold
          old = red bold
          new = green bold
          whitespace = red reverse
        [color "status"]
          added = green
          changed = red
          untracked = cyan

    Core

        [core]
          autocrlf = true
          excludesfile = c:/Users/<user>/.gitignore
          editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and Diff

        [merge]
          tool = kdiff3
        [mergetool "kdiff3"]
          path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [mergetool "p4merge"]
          path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
          cmd = p4merge \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
          keepTemporaries = false
          trustExitCode = false
          keepBackup = false
        [diff]
          guitool = kdiff3
        [difftool "kdiff3"]
          path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [difftool "p4merge"]
          path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
          cmd = \"p4merge.exe $LOCAL $REMOTE\"

    Read the article

  • SSMS Tools Pack 2.5 is out. Added support for SQL Server 2012.

    - by Mladen Prajdic
    Because I wanted to make SSMS Tools Pack as solid as possible for SSMS 2012, there are no new features in this release, only bug fixes and speed improvements. I am planning new awesome features for the next version, so be on the lookout. The biggest change is that SSMS Tools Pack for SSMS 2012 is no longer free; for previous SSMS versions it is still free. Licensing now offers the following options:

    - Per-machine license ($29.99). Perfect if you do all your work from a single machine. This license is valid per major release of SSMS Tools Pack (e.g. v2.x, v3.x, v4.x).
    - Fully transferable license, valid for 3 months ($99.99). Perfect for work across many machines. It's not bound to a machine or an SSMS Tools Pack version.
    - 30-day license. A time-based demo license bound to a machine.

    You can view all the details on the Licensing page. If you want to receive email notifications when a new version of SSMS Tools Pack is out, you can do that on the Main page or on the Download page. This is also the last version to support SSMS 2005 and 2005 Express. Enjoy it!

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donors have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:

    - [dbo].[bigProduct] with 25200 rows
    - [dbo].[bigTransactionHistory] with 7827579 rows

    The credentials to login and use AdventureWorks on Azure are as they were before:

        Server    mhknbn2kdz.database.windows.net
        Database  AdventureWorks2012
        User      sqlfamily
        Password  sqlf@m1ly

    Remember, if you want to support AdventureWorks on Azure, simply click here to launch a pre-populated PayPal Send Money form - all you have to do is login, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate.

    I mentioned that I had to adapt Adam's script, the main reasons being:

    - Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master].
    - SELECT…INTO is not supported in SQL Azure (see the sketch below).
    - The 1GB limit of the SQL Azure web edition meant that there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows.

    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755

    @Jamiet
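    In case it helps anyone adapting similar scripts, the SELECT…INTO restriction mentioned above typically means rewriting each such statement as CREATE TABLE plus INSERT…SELECT. A minimal sketch (the table and columns here are illustrative, not taken from Adam's script):

        -- Not supported on SQL Azure (at the time of writing):
        --   SELECT ProductID, Name INTO dbo.bigProduct FROM Production.Product;
        -- The equivalent rewrite creates the target table explicitly first:
        CREATE TABLE dbo.bigProduct
        (
            ProductID int NOT NULL,
            Name nvarchar(50) NOT NULL
        );
        INSERT INTO dbo.bigProduct (ProductID, Name)
        SELECT ProductID, Name
        FROM Production.Product;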

    Read the article

  • Do you have Standard Operating Procedures in place for SQL?

    - by Jonathan Kehayias
    For the last two weeks, I have been on Active Duty with the Army, completing the last phase of BNCOC (Basic Non-Commissioned Officers Course) for my MOS (Military Occupational Specialty).  While attending this course, a number of things stood out to me that have practical application in the civilian sector as well as in the military.  One of these is the necessity and purpose behind Standard Operating Procedures, or as we refer to them, SOPs.  In the Army we have official doctrines, often in...(read more)

    Read the article

  • simple script to backup PostgreSQL database

    - by Mick
    Hello, I've written a simple batch script to back up PostgreSQL databases, but I've found one strange problem: can the pg_dump command specify a password? Here is the batch script:

        REM script to backup PostgreSQL databases
        @ECHO off
        FOR /f "tokens=1-4 delims=/ " %%i IN ("%date%") DO (
          SET dow=%%i
          SET month=%%j
          SET day=%%k
          SET year=%%l
        )
        SET datestr=%month%_%day%_%year%
        SET db1=opennms
        SET db2=postgres
        SET db3=sr_preproduction
        REM SET db4=sr_production
        ECHO datestr is %datestr%

        SET BACKUP_FILE1=D:\%db1%_%datestr%.sql
        SET FIlLENAME1=%db1%_%datestr%.sql
        SET BACKUP_FILE2=D:\%db2%_%datestr%.sql
        SET FIlLENAME2=%db2%_%datestr%.sql
        SET BACKUP_FILE3=D:\%db3%_%datestr%.sql
        SET FIlLENAME3=%db3%_%datestr%.sql
        SET BACKUP_FILE4=D:\%db4%_%datestr%.sql
        SET FIlLENAME4=%db4%_%datestr%.sql
        ECHO Backup file name is %FIlLENAME1% , %FIlLENAME2% , %FIlLENAME3% , %FIlLENAME4%

        ECHO off
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%
        pg_dump -U postgres -h localhost -p 5432 %db2% > %BACKUP_FILE2%
        pg_dump -U postgres -h localhost -p 5432 %db3% > %BACKUP_FILE3%
        REM pg_dump -U postgres -h localhost -p 5432 %db4% > %BACKUP_FILE4%
        ECHO DONE !

    Please give me advice. Regards, Mick
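    A note on the password question: pg_dump itself has no password switch. A common workaround (a sketch, not part of Mick's script) is to supply the password via the PGPASSWORD environment variable, or more securely via a pgpass.conf file under %APPDATA%\postgresql\:

        REM 'secret' is a placeholder; PGPASSWORD is read by pg_dump and
        REM suppresses the interactive password prompt.
        SET PGPASSWORD=secret
        pg_dump -U postgres -h localhost -p 5432 opennms > D:\opennms.sql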

    Read the article

  • We're Subversion Geeks and we want to know the benefits of Mercurial

    - by Matt
    Having read "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?", I have a related follow-up question. I read that question, read the recommended links, watched the videos, and I see the benefits, but I don't see the overall mindshift people are talking about. Our team is 8-10 developers working on one large code base consisting of 60 projects. We use Subversion and have a main trunk. When a developer starts a new FogBugz case, they create an svn branch, do the work on the branch, and when they're done they merge back to the trunk. Occasionally they may stay on the branch for an extended time and merge the trunk to the branch to pick up the changes. When I watched Linus talk about people creating a branch and never doing it again, that's not us at all. We create probably 50-100 branches a week without issue. The biggest challenge is the merging, but we've gotten pretty good at that as well. I tend to merge by FogBugz case and check-in rather than the entire root of the branch. We never work remotely and we never make branches off of branches. If you're the only one working in that section of the code base, then the merge to the trunk goes smoothly. If someone else has modified the same section of code, then the merge can get messy and you might need to do some surgery. Conflicts are conflicts; I don't see how any system could get it right most of the time, unless it was smart enough to understand the code. After creating a branch, the following checkout of 60k+ files takes some time, but that would be an issue with any source control system we'd use. Is there some benefit of any DVCS that we're not seeing that would be of great help to us?

    Read the article

  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases.

    Setup: IIS 7.5 under W2K8R2 trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to the SQL Server with the following:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the machine account (<machinename>$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the Application pool identity.

    How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the Anonymous logon?
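    One thing worth checking (a sketch, not from the original question): when a site runs under ApplicationPoolIdentity, connections to a remote SQL Server are made as the web server's computer account, so that account needs a login and database user. The names below are hypothetical:

        -- Run on the SQL Server; MYDOMAIN\WEBSERVER$ stands in for the web
        -- server's computer account, TestDb for the target database.
        CREATE LOGIN [MYDOMAIN\WEBSERVER$] FROM WINDOWS;
        USE TestDb;
        CREATE USER [MYDOMAIN\WEBSERVER$] FOR LOGIN [MYDOMAIN\WEBSERVER$];
        -- If ANONYMOUS LOGON persists after this, it usually points at a
        -- Kerberos/NTLM issue (e.g. a missing SPN) rather than at the grants.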

    Read the article

  • How are people using virtualisation with SQL Server? Part 3

    - by GavinPayneUK
    Monitoring. After CPU, memory and storage, monitoring is the fourth thing that changes massively, according to Brent Ozar's list, when you move to virtualisation. Some of the performance counters we used to organise our lives around become meaningless, and the performance of the host server is often overlooked when looking for problems. What's encouraging here is that the majority of people are already looking beyond the performance of the virtual server and at the performance of the host server. This...(read more)

    Read the article

  • How do I populate multiple records of data into a PDF form like a mail-merge?

    - by user38801
    I have Acrobat Pro, and I have a PDF with a form on it. Assuming the fields in the form correspond to a data source (like rows in an RDBMS table or xml file), I want to then print multiple copies of the PDF file, with each copy having the values of a different row in the data source. It is preferable to directly interface with an actual database, rather than having to save an XML file every time I do this. If this involves programming that's cool too, I only posted here because the question didn't seem appropriate for StackOverflow. Thanks!

    Read the article

  • Windows Azure Error: Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found

    - by DigiMortal
    When running some of your Windows Azure applications while the storage emulator is not configured, you may get the following error: "Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Storage Emulator: the SQL Server instance 'localhost\SQLExpress' could not be found. Please configure the SQL Server instance for Storage Emulator using the 'DSInit' utility in the Windows Azure SDK." Here's how to solve this problem.

    You need to run the DSInit utility to create the database. For Windows Azure SDK 1.6, the location of the DSInit utility is:

        C:\Program Files\Windows Azure Emulator\emulator\devstore

    By default, DSInit expects that your database server is (local)\SQLEXPRESS, but you can change it easily. If you have an MSSQL instance called SQLEXPRESS, then it is enough to just run DSInit. If you need some other instance, then run the following command:

        DSInit /sqlinstance:<instance name>

    For the default instance, use "." as the instance name:

        DSInit /sqlinstance:.

    You can find more information about sqlinstance and other switches in the DSInit documentation. If the database was created correctly, you should see a confirmation dialog. When the storage database is ready, you can run your application.

    Read the article

  • You cannot do cross joins in SQL Azure but there is a way around that....

    - by SeanBarlow
    So I was asked today how to do cross-database joins in SQL Azure using LINQ. Well, the simple answer is that you can't do it; it is not supported. But there are ways around that, and the solution is actually very simple and easy to implement. So here is what I did and how I did it.

    I created two SQL Azure databases. The first database is called AccountDb and has a single table named Account, which has an Id, CompanyId and Name in it. The second database I called CompanyDb, and it contains two tables: the first I named Company and the second I named Address. The Company table has an Id and Name column. The Address table has an Id and CompanyId column. Since we cannot do cross-database joins in Azure, we have to have one of the models preloaded with data. I simply put the Accounts into a List of accounts and use that in my join.

        var accounts = new AccountsModelContainer().Accounts.ToList();
        var companies = new CompanyModelContainer().Companies;
        var query = from account in accounts
                    join company in
                        (
                            from c in companies
                            select c
                        ) on account.CompanyId equals company.Id
                    select new AccountView()
                    {
                        AccountName = account.Name,
                        CompanyName = company.Name,
                        Addresses = company.Addresses
                    };
        return query.ToList();

    So as long as you have your data loaded from one of the contexts, you can still execute your queries and get the data back that you want.

    Read the article

  • Are you a SQLBits attendee? get a discount for PASS Europe SQL 2008 R2 Launch

    - by simonsabin
    PASS have given us a number of prizes for SQLBits. We have registrations for PASS Europe and PASS North America, as well as DVD sets of the sessions from last year's North America summit, to give away. Not only that: if you want to go to PASS Europe and you are a SQLBits user, you can get a promotion code that not only gives you the best price, it also raises money for SQLBits. To get the promotion code, login and then visit the community page: http://www.sqlbits.com/about/Community.aspx

    Read the article

  • Oracle Database 11gR2: popular content for January 2011 (Japanese-language post; title partially garbled in extraction)

    - by Yusuke.Yamamoto
    (This Japanese-language post did not survive extraction; most of the text was reduced to placeholder characters. What can still be recovered: it is the January 2011 monthly ranking of popular content from an Oracle Japan site, noting coverage of Oracle Database 10g Release 2 (10gR2) on Microsoft Windows Server 2008 R2 and Windows 7, with Oracle Database 11gR2 material ranking highly. The ranked entries reference, among other topics: installing Oracle 10gR2 on Windows 2008 R2/Windows 7, SQL Developer, Oracle ASM, SQL*Loader, Oracle backup methods (October 2010), DBA-themed articles (January 2011), Oracle Database/OS support combinations, LIKE searches with Oracle Text, Oracle Database 11gR2, and SQL*Plus tips.)

    Read the article

  • Index of OTN Japan seminar videos (Japanese-language post; title garbled in extraction)

    - by Yuichi Hayashi
    (This Japanese-language post did not survive extraction; most of the text was reduced to placeholder characters. What can still be recovered: it is an index of OTN Japan seminar recordings, each offered as PDF slides plus WMV and MP4 video downloads, spanning several multi-part series - among them SQL tuning, Parts 1-5, recognisable from the titles - and individual sessions on Exadata, TimesTen, GoldenGate, Oracle CEP, and WebLogic.)

    Read the article

  • Is Fast Enumeration messing with my text output?

    - by Dan Ray
    Here I am iterating through an array of NSDictionary objects (inside the parsed JSON response of the EXCELLENT MapQuest directions API). I want to build up an HTML string to put into a UIWebView. My code says:

        for (NSDictionary *leg in legs) {
            NSString *thisLeg = [NSString stringWithFormat:@"<br>%@ - %@",
                                 [leg valueForKey:@"narrative"],
                                 [leg valueForKey:@"distance"]];
            NSLog(@"This leg's string is %@", thisLeg);
            [directionsOutput appendString:thisLeg];
        }

    The content of directionsOutput (which is an NSMutableString) contains ALL the values for [leg valueForKey:@"narrative"], wrapped up in parens, followed by a hyphen, followed by all the parenthesized values for [leg valueForKey:@"distance"]. So I put in that NSLog call... and I get the same thing there! It appears that the for() is somehow batching up our output values as we iterate, and putting out the output only once. How do I make it not do this but instead do what I actually want, which is an iterative output as I iterate? Here's what NSLog gets. Yes, I know I need to figure out NSNumberFormatter. ;-)

        This leg's string is (
            "Start out going NORTH on INFINITE LOOP.",
            "Turn LEFT to stay on INFINITE LOOP.",
            "Turn RIGHT onto N DE ANZA BLVD.",
            "Merge onto I-280 S toward SAN JOSE.",
            "Merge onto CA-87 S via EXIT 3A.",
            "Take the exit on the LEFT.",
            "Merge onto CA-85 S via EXIT 1A on the LEFT toward GILROY.",
            "Merge onto US-101 S via EXIT 1A on the LEFT toward LOS ANGELES.",
            "Take the CA-152 E/10TH ST exit, EXIT 356.",
            "Turn LEFT onto CA-152/E 10TH ST/PACHECO PASS HWY. Continue to follow CA-152/PACHECO PASS HWY.",
            "Turn SLIGHT RIGHT.",
            "Turn SLIGHT RIGHT onto PACHECO PASS HWY/CA-152 E. Continue to follow CA-152 E.",
            "Merge onto I-5 S toward LOS ANGELES.",
            "Take the CA-46 exit, EXIT 278, toward LOST HILLS/WASCO.",
            "Turn LEFT onto CA-46/PASO ROBLES HWY. Continue to follow CA-46.",
            "Merge onto CA-99 S toward BAKERSFIELD.",
            "Merge onto CA-58 E via EXIT 24 toward TEHACHAPI/MOJAVE.",
            "Merge onto I-15 N via the exit on the LEFT toward I-40/LAS VEGAS.",
            "Keep RIGHT to take I-40 E via EXIT 140A toward NEEDLES (Passing through ARIZONA, NEW MEXICO, TEXAS, OKLAHOMA, and ARKANSAS, then crossing into TENNESSEE).",
            "Merge onto I-40 E via EXIT 12C on the LEFT toward NASHVILLE (Crossing into NORTH CAROLINA).",
            "Merge onto I-40 BR E/US-421 S via EXIT 188 on the LEFT toward WINSTON-SALEM.",
            "Take the CLOVERDALE AVE exit, EXIT 4.",
            "Turn LEFT onto CLOVERDALE AVE SW.",
            "Turn SLIGHT LEFT onto N HAWTHORNE RD.",
            "Turn RIGHT onto W NORTHWEST BLVD.",
            "1047 W NORTHWEST BLVD is on the LEFT."
        ) - (
            0.0020000000949949026,
            0.07800000160932541,
            0.14000000059604645,
            7.827000141143799,
            5.0329999923706055,
            0.15299999713897705,
            5.050000190734863,
            20.871000289916992,
            0.3050000071525574,
            2.802999973297119,
            0.10199999809265137,
            37.78000259399414,
            124.50700378417969,
            0.3970000147819519,
            25.264001846313477,
            20.475000381469727,
            125.8580093383789,
            4.538000106811523,
            1693.0350341796875,
            628.8970336914062,
            3.7990000247955322,
            0.19099999964237213,
            0.4099999964237213,
            0.257999986410141,
            0.5170000195503235,
            0
        )

    Read the article

  • How do I merge a transient entity with a session using Castle ActiveRecordMediator?

    - by Daniel T.
    I have a Store and a Product entity:

        public class Store
        {
            public Guid Id { get; set; }
            public int Version { get; set; }
            public ISet<Product> Products { get; set; }
        }

        public class Product
        {
            public Guid Id { get; set; }
            public int Version { get; set; }
            public Store ParentStore { get; set; }
            public string Name { get; set; }
        }

    In other words, I have a Store that can contain multiple Products in a bidirectional one-to-many relationship. I'm sending data back and forth between a web browser and a web service. The following steps emulate the communication between the two, using session-per-request. I first save a new instance of a Store:

        using (new SessionScope())
        {
            // this is data from the browser
            var store = new Store { Id = Guid.Empty };
            ActiveRecordMediator.SaveAndFlush(store);
        }

    Then I grab the store out of the DB, add a new product to it, and then save it:

        using (new SessionScope())
        {
            // this is data from the browser
            var product = new Product { Id = Guid.Empty, Name = "Apples" };
            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            store.Products.Add(product);
            ActiveRecordMediator.SaveAndFlush(store);
        }

    Up to this point, everything works well. Now I want to update the Product I just saved:

        using (new SessionScope())
        {
            // data from browser
            var product = new Product { Id = Guid.Empty, Version = 1, Name = "Grapes" };
            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            store.Products.Add(product);

            // throws exception on this call
            ActiveRecordMediator.SaveAndFlush(store);
        }

    When I try to update the product, I get the following exception:

        a different object with the same identifier value was already associated with the session: 00000000-0000-0000-0000-000000000000, of entity:Product

    As I understand it, the problem is that when I get the Store out of the database, it also gets the Product that's associated with it. Both entities are persistent. Then I tried to save a transient Product (the one that has the updated info from the browser) that has the same ID as the one that's already associated with the session, and an exception is thrown. However, I don't know how to get around this problem. If I could get access to an NHibernate.ISession, I could call ISession.Merge() on it, but it doesn't look like ActiveRecordMediator has anything similar (SaveCopy threw the same exception). Does anyone know the solution? Any help would be greatly appreciated!

    Read the article
