Search Results

Search found 3163 results on 127 pages for 'schema'.


  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to an INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done as a rebuild with a conversion function. That's OK; we can simply hard-code the conversion functions for the valid datatype conversions and insert them into the rebuild SELECT list.

    However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hard-code a default for this parameter, as it is entirely dependent on the user's data context. So, in order to convert NUMBER to INTERVAL DAY TO SECOND or INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script. This requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to let the user specify them in a sensible fashion. Having implemented the engine and UI infrastructure to allow this, it made much more sense to support any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions we can perform, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows them to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be used as the conversion value.

    2. NOT NULL columns

    Another problem solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column: Oracle doesn't know what value to put in the new column for existing rows, so the DDL statement will fail. There are actually three separate scenarios for this problem, each with its own solution within the engine:

    Adding a NOT NULL column to a table without a rebuild. Here, the workaround is to add a column default with an appropriate value to the column you're adding:

        ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, that once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.

    Adding a NOT NULL column to a table where a separate change forced a table rebuild. Fortunately, in this case a column default is not required; we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL column to NOT NULL. To implement this, we run an UPDATE before the ALTER TABLE to change all the NULLs in the column to the required default value.

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely.

    Note that the same (long-running) problem exists in SQL Compare, but it is much more of an issue in Oracle, as you cannot easily roll back executed DDL statements if the script fails partway through execution. Furthermore, the SQL Compare engine is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.

    Read the article

  • Should I use a unit testing framework to validate XML documents?

    - by christofr
    From http://www.w3.org/XML/Schema: [XML Schemas] provide a means for defining the structure, content and semantics of XML documents. I'm using an XML Schema (XSD) to validate several large XML documents. While I'm finding plenty of support within XSD for checking the structure of my documents, there are no procedural if/else features that allow me to say, for instance, If Country is USA, then Zipcode cannot be empty. I'm comfortable using unit testing frameworks, and could quite happily use a framework to test content integrity. Am I asking for trouble doing it this way, rather than an alternative approach? Has anybody tried this with good / bad results? -- Edit: I didn't include this information to keep it technology agnostic, but I would be using C# / Linq / xUnit for deserialization / testing.
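
    By way of illustration, a minimal sketch of what such tests might look like, assuming C#/xUnit and a hypothetical orders.xml/orders.xsd pair: schema validity is asserted first, and the procedural rule that XSD 1.0 cannot express (if Country is USA, Zipcode must not be empty) is checked with LINQ afterwards. Element names here are assumptions, not taken from the question.

        using System.Linq;
        using System.Xml.Linq;
        using System.Xml.Schema;
        using Xunit;

        public class OrderDocumentTests
        {
            private readonly XDocument doc = XDocument.Load("orders.xml");

            [Fact]
            public void Document_is_valid_against_the_schema()
            {
                var schemas = new XmlSchemaSet();
                schemas.Add(null, "orders.xsd");   // target namespace taken from the XSD itself

                bool valid = true;
                doc.Validate(schemas, (sender, e) => valid = false);
                Assert.True(valid);
            }

            [Fact]
            public void US_addresses_always_have_a_zipcode()
            {
                // The content rule that the schema alone cannot enforce.
                var usAddresses = doc.Descendants("Address")
                                     .Where(a => (string)a.Element("Country") == "USA");

                Assert.All(usAddresses, a =>
                    Assert.False(string.IsNullOrEmpty((string)a.Element("Zipcode"))));
            }
        }

    Kept small like this, the tests read as executable documentation of the content rules, which is arguably the main benefit of the unit-testing approach over hand-rolled validation code.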

    Read the article

  • Azure Table – Entities having different Schema (Implementation Approach)

    - by kaleidoscope
    Below is an approach that can be used whenever there is a requirement to create an Azure Table whose entities have different schema definitions. Define a Parent Entity that holds the data common to all the entity types, and have all the other entities inherit from this parent class. A single DataServiceContext class that accepts objects of the parent class can then be used for CRUD operations on all the entities; a minimal sketch of this idea follows below. Hope this approach helps!
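
    For illustration only, here is a rough C# sketch of that layout: a parent class holding the common properties, two differently-shaped entities inheriting from it, and a single repository typed against the parent. The class and member names are hypothetical and not tied to any particular Azure SDK version.

        using System;

        // Common fields shared by every entity stored in the table; PartitionKey and
        // RowKey mirror the usual Azure Table addressing scheme.
        public class ParentEntity
        {
            public string PartitionKey { get; set; }
            public string RowKey { get; set; }
            public DateTime Timestamp { get; set; }
        }

        // One entity shape...
        public class CustomerEntity : ParentEntity
        {
            public string Name { get; set; }
            public string Email { get; set; }
        }

        // ...and a second, different shape stored in the same table.
        public class OrderEntity : ParentEntity
        {
            public decimal Amount { get; set; }
            public DateTime OrderedOn { get; set; }
        }

        // A single context/repository that accepts the parent type can perform CRUD
        // for either shape, which is the role the post assigns to DataServiceContext.
        public static class TableRepository
        {
            public static void Save(ParentEntity entity)
            {
                // Stand-in for the real persistence call.
                Console.WriteLine("Saving {0}/{1} ({2})",
                    entity.PartitionKey, entity.RowKey, entity.GetType().Name);
            }
        }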

    Read the article

  • ERROR_PROCEDURE Does Not Return a Schema Name

    "A recent blog entry I read reminded me again that I wanted to rant about an issue in SQL Server for quite some time now. SQL Server 2005 introduced the separation between user and schema. Though schemata already existed before SQL Server 2005, they really became usable with this version, imho. At the same time TRY...CATCH was a new way for structured error handling introduced. And so it finally became possible…" NEW! SQL Monitor 2.0Monitor SQL Server Central's servers withRed Gate's new SQL Monitor.No installation required. Find out more.

    Read the article

  • Synchronizing MySQL Table Schema [on hold]

    - by user1122069
    I have some difficulty keeping track of my SQL changes in a text file in SVN. One solution that I am aware of is to put the SQL queries in files (1.sql, 2.sql...) and to manually load each file at the proper time. Besides missing commas, the process is too cumbersome when builds become more frequent. I have actually taken to asking a co-worker to send me his SQL changes on Skype and we just apply the changes immediately on our local, development, and production servers (using three PhpMyAdmin tabs). I have seen several GUI tools mentioned in similar questions on SO, but these are actually more work and less automated than the aforementioned methods. Is there any standardized process by which this is done in an automated way? I can only guess that large companies build their own mechanisms of keeping track of database schema changes (yet I can't find a word about it - maybe they use files?). This question was closed as off-topic on Stack Overflow, so I am re-posting it here.

    Read the article

  • Trouble resolving location in <xs:import> element in C#

    - by BobC
    I'm using an XML schema document to validate incoming data documents; however, the schema appears to be failing during compilation at run time because it refers to a complex type which is part of an external schema. The external schema is specified in an import element at the top of the document. I had thought it might be an access problem, so I moved a copy of the external document to a localhost folder. I get the same error, so now I'm wondering if there might be some sort of issue with the way I'm using the import element. The schema document fragment looks like this:

        <xs:schema targetNamespace="http://www.smpte-ra.org/schemas/429-7/2006/CPL"
                   xmlns:cpl="http://www.smpte-ra.org/schemas/429-7/2006/CPL"
                   xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   elementFormDefault="qualified" attributeFormDefault="unqualified">
          ...
          <xs:import namespace="http://www.w3.org/2000/09/xmldsig#"
                     schemaLocation="http://localhost/TMSWebServices/XMLSchema/xmldsig-core-schema.xsd"/>
          ...
          <xs:element name="Signer" type="ds:KeyInfoType" minOccurs="0"/>
          ...
        </xs:schema>

    The code I'm trying to run it with is really simple (got it from http://dotnetslackers.com/Community/blogs/haissam/archive/2008/11/06/validate-xml-against-xsd-xml-schema-using-c.aspx):

        string XSDFILEPATH = @"http://localhost/TMSWebServices/XMLSchema/CPL.xsd";
        string XMLFILEPATH = @"C:\foo\bar\files\TestCPLs\CPL_930f5e92-be03-440c-a2ff-a13f3f16e1d6.xml";

        System.Xml.XmlReaderSettings settings = new System.Xml.XmlReaderSettings();
        settings.Schemas.Add(null, XSDFILEPATH);
        settings.ValidationType = System.Xml.ValidationType.Schema;

        System.Xml.XmlDocument document = new System.Xml.XmlDocument();
        document.Load(XMLFILEPATH);

        System.Xml.XmlReader rdr = System.Xml.XmlReader.Create(new StringReader(document.InnerXml), settings);
        while (rdr.Read()) { }

    Everything goes well until the line that instantiates the XmlReader object just before the while loop. Then it fails with a "type not declared" error. The type that it's trying to find, KeyInfoType, is defined in one of the documents referenced by the import element. I've made sure the namespaces line up. I wondered if the # signs in the namespace definitions were causing a problem, but removing them had no effect; it just changed what the error looked like (i.e. "Type 'http://www.w3.org/2000/09/xmldsig:KeyInfoType' is not declared." versus "Type 'http://www.w3.org/2000/09/xmldsig#:KeyInfoType' is not declared."). My suspicion is that there's something about the processing of the <xs:import> element that I'm missing. Any suggestions are very welcome. Thanks!
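
    One thing that often resolves this kind of "type not declared" failure (a sketch, not a confirmed fix for this exact case) is to add the imported xmldsig schema to the schema set explicitly, keyed by its target namespace, so the validator does not have to chase the schemaLocation itself, and to validate the file directly rather than round-tripping through XmlDocument.InnerXml:

        using System;
        using System.Xml;
        using System.Xml.Schema;

        class CplValidationSketch
        {
            static void Main()
            {
                // Paths taken from the question; adjust for your environment.
                string cplXsd  = "http://localhost/TMSWebServices/XMLSchema/CPL.xsd";
                string dsigXsd = "http://localhost/TMSWebServices/XMLSchema/xmldsig-core-schema.xsd";
                string xmlFile = @"C:\foo\bar\files\TestCPLs\CPL_930f5e92-be03-440c-a2ff-a13f3f16e1d6.xml";

                XmlReaderSettings settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;

                // Register both schemas up front, keyed by their target namespaces.
                settings.Schemas.Add("http://www.w3.org/2000/09/xmldsig#", dsigXsd);
                settings.Schemas.Add("http://www.smpte-ra.org/schemas/429-7/2006/CPL", cplXsd);
                settings.Schemas.Compile();   // surfaces unresolved-type errors immediately

                settings.ValidationEventHandler += (sender, e) =>
                    Console.WriteLine("{0}: {1}", e.Severity, e.Message);

                // Validate the file directly from disk.
                using (XmlReader reader = XmlReader.Create(xmlFile, settings))
                {
                    while (reader.Read()) { }
                }
            }
        }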

    Read the article

  • How to define multiple elements in XML Schema with the same name and different attribute value allow

    - by David Skyba
    I would like to create an XML Schema for this chunk of XML, restricting the values of the "name" attribute so that, in the output document, one and only one instance of day is allowed for each week day:

        <a>
          <day name="monday" />
          <day name="tuesday" />
          <day name="wednesday" />
        </a>

    I have tried to use this:

        <xs:complexType name="a">
          <xs:sequence>
            <xs:element name="day" minOccurs="1" maxOccurs="1">
              <xs:complexType>
                <xs:attribute name="name" use="required">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:enumeration value="monday" />
                    </xs:restriction>
                  </xs:simpleType>
                </xs:attribute>
              </xs:complexType>
            </xs:element>
            <xs:element name="day" minOccurs="1" maxOccurs="1">
              <xs:complexType>
                <xs:attribute name="name" use="required">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:enumeration value="tuesday" />
                    </xs:restriction>
                  </xs:simpleType>
                </xs:attribute>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:complexType>

    but the XML Schema validator in Eclipse reports the error "Multiple elements with name 'day', with different types, appear in the model group.". Is there any other way?

    Read the article

  • Oracle Database Security: Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM complements nicely the overall Oracle security set of solutions. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop! With the recent release of Oracle IRM 11g I was tasked to configure demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options:

    Transparent Data Encryption protecting the communication from the Oracle IRM server to the database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end.

    Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database table space.

    Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administrate the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.

    Read the article

  • Handling element collisions on importing/including XML schemas

    - by eggyal
    Given schema definitions that define the same element differently, can one import/include both definitions and reference them independently from within a third schema definition? For example, given:

        <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:example:namespace">
          <element name="message" type="boolean"/>
        </schema>

    and:

        <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:example:namespace">
          <element name="message" type="date"/>
        </schema>

    can one construct the following?

        <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:example:namespace">
          <complexType name="booleanMessageType">
            <xs:sequence>
              <!-- reference to first definition here -->
            </xs:sequence>
          </complexType>
          <complexType name="dateMessageType">
            <xs:sequence>
              <!-- reference to second definition here -->
            </xs:sequence>
          </complexType>
        </schema>

    Read the article

  • Spring 3.0: Unable to locate Spring NamespaceHandler for XML schema namespace

    - by Nick Hristov
    My setup is fairly simple: I have a web front-end, and the back-end is Spring-wired. I am using AOP to add a layer of security on my RPC services. It's all good, except that the web app aborts on launch:

        [java] SEVERE: Context initialization failed
        [java] org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]
        [java] Offending resource: ServletContext resource [/WEB-INF/gwthandler-servlet.xml]

    Here is the snippet from my XML config file:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">

          <aop:config>
            <aop:aspect id="security" ref="securityAspect">
              <aop:pointcut id="securedServices" expression="@annotation(com.fb.boog.common.aspects.Secured)"/>
              <aop:before method="checkSecurity" pointcut-ref="securedServices"/>
            </aop:aspect>
          </aop:config>

    I read on the internet that classloading may be at the core of the problem. Doubtful, since here is my WEB-INF/lib directory:

        ./WEB-INF/lib
        ./WEB-INF/lib/aopalliance-alpha1.jar
        ./WEB-INF/lib/aspectj-1.6.6.jar
        ./WEB-INF/lib/commons-collections.jar
        ./WEB-INF/lib/commons-logging.jar
        ./WEB-INF/lib/ehcache-core-1.7.0.jar
        ./WEB-INF/lib/ejb3-persistence.jar
        ./WEB-INF/lib/hibernate
        ./WEB-INF/lib/hibernate/antlr.jar
        ./WEB-INF/lib/hibernate/asm.jar
        ./WEB-INF/lib/hibernate/bsh-2.0b1.jar
        ./WEB-INF/lib/hibernate/cglib.jar
        ./WEB-INF/lib/hibernate/dom4j.jar
        ./WEB-INF/lib/hibernate/freemarker.jar
        ./WEB-INF/lib/hibernate/hibernate-annotations.jar
        ./WEB-INF/lib/hibernate/hibernate-shards.jar
        ./WEB-INF/lib/hibernate/hibernate-tools.jar
        ./WEB-INF/lib/hibernate/hibernate.jar
        ./WEB-INF/lib/hibernate/jtidy-r8-20060801.jar
        ./WEB-INF/lib/jabsorb
        ./WEB-INF/lib/jabsorb/jabsorb-1.3.1.jar
        ./WEB-INF/lib/jta.jar
        ./WEB-INF/lib/jyaml-1.3.jar
        ./WEB-INF/lib/postgresql-8.4-701.jdbc4.jar
        ./WEB-INF/lib/sjsxp.jar
        ./WEB-INF/lib/spring
        ./WEB-INF/lib/spring/org.springframework.aop-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.asm-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.aspects-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.beans-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.context-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.context.support-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.core-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.expression-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.instrument-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.instrument.tomcat-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.jdbc-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.jms-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.orm-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.oxm-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.test-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.transaction-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.web-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.web.portlet-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.web.servlet-3.0.0.RELEASE.jar
        ./WEB-INF/lib/spring/org.springframework.web.struts-3.0.0.RELEASE.jar
        ./WEB-INF/lib/testng-5.11-jdk15.jar
        ./WEB-INF/web.xml

    Read the article

  • Hibernate named query with a variable schema name

    - by Sandeep Jindal
    Hi. Other than the default schema, for some SQL queries I need to access a particular schema, and the name of that schema is different in different environments. After googling, I found that using this link I am able to specify the schema name in a variable. If that's true, I have the following questions: Will that work for SQL queries in a named query? And how do I set the value for the variable name? Thanks in advance.

    Read the article

  • SQL Server schema-owner permissions

    - by Andrew Bullock
    If I do:

        CREATE SCHEMA [test] AUTHORIZATION [testuser]

    testuser doesn't seem to have any permissions on the schema. Is this correct? I thought that, as the principal that owns the schema, you had full control over it? What permission do I need to grant testuser so that it has full control over the test schema only?

    Edit: by "full control" I mean the ability to CRUD tables, views, sprocs etc.

    Thanks

    Read the article

  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of a desktop application, and each version has its own database structure. There are about 200 offices and each office will have its own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest version, which will be version 8.

    The problem is that they don't have a separate database for each version, nor a separate database for each office. They have one single database which is handled by a dedicated server, thus keeping things like management and backups easier. Every office has its own database schema, and within that schema is the whole database structure for its specific application version. As a result, I'm dealing with 200 different schemas which need to be upgraded, each at one of 7 possible versions. Fortunately, every schema knows its version, so checking the version isn't difficult. But my problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3 and so on. Basically, all schemas need to be bumped up one version at a time until they're all at version 8.

    Writing the code that will do this is no problem. The challenge is how to create the upgrade script from one version to the next, preferably with some automated tool. I've examined RedGate's SQL Compare and Altova's DatabaseSpy, but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction handling, while I need everything within a single transaction.) I have been considering another SQL comparison tool, but it seems to me that my case is just too different from what standard tools can deliver.

    So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL statements that will upgrade each schema to its next version (comparing 1 with 2, 2 with 3, 3 with 4, etc.). I'm not unfamiliar with writing SQL script generators, so I don't expect too many problems, and I'll only be upgrading the table structures, not any of the other database objects. So, does anyone have some good tips and tricks to apply when doing this kind of comparison? Things to be aware of? Practical tips to increase speed?
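
    As a rough illustration of the catalogue-comparison idea only (the poster's actual plan is ADOX with Delphi; this sketch uses C# and the ADO.NET schema collections instead, with hypothetical connection and schema names), the core of such a tool is reading the column metadata for two schema versions and emitting statements for the differences:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Linq;

        class SchemaDiffSketch
        {
            // Reads "table.column" -> datatype for one schema via the ADO.NET schema collections.
            static Dictionary<string, string> LoadColumns(SqlConnection conn, string schemaName)
            {
                DataTable cols = conn.GetSchema("Columns", new[] { null, schemaName, null, null });
                return cols.Rows.Cast<DataRow>().ToDictionary(
                    r => r["TABLE_NAME"] + "." + r["COLUMN_NAME"],
                    r => (string)r["DATA_TYPE"]);
            }

            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=Offices;Integrated Security=true"))
                {
                    conn.Open();
                    var v1 = LoadColumns(conn, "office_v1");   // schema at version 1
                    var v2 = LoadColumns(conn, "office_v2");   // schema at version 2

                    // Columns present in v2 but missing from v1 become ADD statements.
                    foreach (var kv in v2.Where(kv => !v1.ContainsKey(kv.Key)))
                        Console.WriteLine("ALTER TABLE {0} ADD {1} {2}",
                            kv.Key.Split('.')[0], kv.Key.Split('.')[1], kv.Value);
                }
            }
        }

    A real tool would also have to handle dropped and retyped columns, key and constraint changes, and data preservation, but the skeleton of "read two catalogues, diff, emit DDL" stays the same.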

    Read the article

  • Generated queries contain schema and catalog name

    - by stacker
    I have the same problem as described here. In the generated SQL, Informix expects catalog:schema.table, but what's actually generated is catalog.schema.table, which leads to a syntax error. Setting

        hibernate.default_catalog=
        hibernate.default_schema=

    had no effect. I even removed schema and catalog from the table annotation; this caused a different issue: the query looked like ..table. The same happens when setting catalog and schema to an empty string.

    Versions: Seam 2.1.2, Hibernate Annotations 3.3.1.GA.CP01, Hibernate 3.2.4.sp1.cp08, Hibernate EntityManager 3.3.2.GA, JBoss 4.3 (similar to 4.2.3)

    Read the article

  • How do I create self-relationships in polymorphic inheritance in Elixir and Pylons?

    - by Turukawa
    I am new to programming and am following the example in the Pylons documentation on creating a wiki. The database I want to link to the wiki was created with Elixir, so I rewrote the wiki database schema and have continued from there. In the wiki there is a requirement for a Navigation table which is inherited by Pages and Sections. A section can have many pages, while a page can only have one section. In addition, each sibling node can be chain-referenced to the others. So:

    Nav has "section" (OneToMany) and "before" (OneToOne - to reference the preceding node)
    Page has "section" (ManyToOne - many pages in one section) and inherits "before"
    Section inherits all from Nav

    The code I've written looks like this:

        class Nav(Entity):
            using_options(inheritance='multi')
            name = Field(Unicode(30), default=u'Untitled Node')
            path = Field(Unicode(255), default=u'')
            section = OneToMany('Page', inverse='section')
            after = OneToOne('Nav', inverse='before')
            before = OneToMany('Nav', inverse='after')

        class Page(Nav):
            using_options(inheritance='multi')
            content = Field(UnicodeText, nullable=False)
            posted = Field(DateTime, default=now())
            title = Field(Unicode(255), default=u'Untitled Page')
            heading = Field(Unicode(255))
            tags = ManyToMany('Tag')
            comments = OneToMany('Comment')
            section = ManyToOne('Nav', inverse='section')

        class Section(Nav):
            using_options(inheritance='multi')

    Errors received on this:

        sqlalchemy.exc.OperationalError: (OperationalError) table nav has no column named aftr_id
        u'INSERT INTO nav (name, path, aftr_id, row_type) VALUES (?, ?, ?, ?)'

    I've also tried before = ManyToMany('Nav', inverse='before') on Nav in the hope this might fix the problem, but it did not. The original SQLAlchemy code from the tutorial for these declarations is as follows:

        nav_table = schema.Table('nav', meta.metadata,
            schema.Column('id', types.Integer(),
                schema.Sequence('nav_id_seq', optional=True), primary_key=True),
            schema.Column('name', types.Unicode(255), default=u'Untitled Node'),
            schema.Column('path', types.Unicode(255), default=u''),
            schema.Column('section', types.Integer(), schema.ForeignKey('nav.id')),
            schema.Column('before', types.Integer(), default=None),
            schema.Column('type', types.String(30), nullable=False)
        )

        page_table = schema.Table('page', meta.metadata,
            schema.Column('id', types.Integer, schema.ForeignKey('nav.id'), primary_key=True),
            schema.Column('content', types.Text(), nullable=False),
            schema.Column('posted', types.DateTime(), default=now),
            schema.Column('title', types.Unicode(255), default=u'Untitled Page'),
            schema.Column('heading', types.Unicode(255)),
        )

        section_table = sa.Table('section', meta.metadata,
            schema.Column('id', types.Integer, schema.ForeignKey('nav.id'), primary_key=True),
        )

        orm.mapper(Nav, nav_table, polymorphic_on=nav_table.c.type, polymorphic_identity='nav')
        orm.mapper(Section, section_table, inherits=Nav, polymorphic_identity='section')
        orm.mapper(Page, page_table, inherits=Nav, polymorphic_identity='page', properties={
            'comments': orm.relation(Comment, backref='page', cascade='all'),
            'tags': orm.relation(Tag, secondary=pagetag_table)
        })

    Any help is much appreciated.

    Read the article

  • Multi-Schema Privileges for a Table Trigger in an Oracle Database

    - by sisslack
    I'm trying to write a table trigger which queries another table that is outside the schema where the trigger will reside. Is this possible? I have no problem querying tables in my schema, but I get

        Error: ORA-00942: table or view does not exist

    when trying to query tables outside my schema.

    EDIT: My apologies for not providing as much information as possible the first time around; I was under the impression this question was simpler. I'm trying to create a trigger on a table that changes some fields on a newly inserted row based on the existence of some data that may or may not be in a table that is in another schema. The user account that I'm using to create the trigger does have the permissions to run the queries independently. In fact, I've had my trigger print the query I'm trying to run and was able to run it on its own successfully. I should also note that I'm building the query dynamically using the EXECUTE IMMEDIATE statement. Here's an example:

        CREATE OR REPLACE TRIGGER MAIN_SCHEMA.EVENTS
        BEFORE INSERT ON MAIN_SCHEMA.EVENTS
        REFERENCING OLD AS OLD NEW AS NEW
        FOR EACH ROW
        DECLARE
          rtn_count  NUMBER := 0;
          table_name VARCHAR2(17) := :NEW.SOME_FIELD;
          key_field  VARCHAR2(20) := :NEW.ANOTHER_FIELD;
        BEGIN
          CASE
            WHEN (key_field = 'condition_a') THEN
              EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_A.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count;
            WHEN (key_field = 'condition_b') THEN
              EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_B.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count;
            WHEN (key_field = 'condition_c') THEN
              EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_C.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count;
          END CASE;
          IF (rtn_count > 0) THEN
            -- change some fields that are to be inserted
          END IF;
        END;

    The trigger seems to fail on the EXECUTE IMMEDIATE with the previously mentioned error.

    EDIT: I have done some more research and I can offer more clarification. The user account I'm using to create this trigger is not MAIN_SCHEMA or any one of the OTHER_SCHEMA_Xs. The account I'm using (ME) is given privileges to the involved tables by the schema users themselves. For example (USER_TAB_PRIVS):

        GRANTOR          GRANTEE  TABLE_SCHEMA     TABLE_NAME  PRIVILEGE  GRANTABLE  HIERARCHY
        MAIN_SCHEMA      ME       MAIN_SCHEMA      EVENTS      DELETE     NO         NO
        MAIN_SCHEMA      ME       MAIN_SCHEMA      EVENTS      INSERT     NO         NO
        MAIN_SCHEMA      ME       MAIN_SCHEMA      EVENTS      SELECT     NO         NO
        MAIN_SCHEMA      ME       MAIN_SCHEMA      EVENTS      UPDATE     NO         NO
        OTHER_SCHEMA_X   ME       OTHER_SCHEMA_X   TARGET_TBL  SELECT     NO         NO

    And I have the following system privileges (USER_SYS_PRIVS):

        USERNAME  PRIVILEGE             ADMIN_OPTION
        ME        ALTER ANY TRIGGER     NO
        ME        CREATE ANY TRIGGER    NO
        ME        UNLIMITED TABLESPACE  NO

    And this is what I found in the Oracle documentation: "To create a trigger in another user's schema, or to reference a table in another schema from a trigger in your schema, you must have the CREATE ANY TRIGGER system privilege. With this privilege, the trigger can be created in any schema and can be associated with any user's table. In addition, the user creating the trigger must also have EXECUTE privilege on the referenced procedures, functions, or packages." (Oracle Doc)

    So it looks to me like this should work, but I'm not sure about the "EXECUTE privilege" it's referring to in the doc.

    Read the article

  • XmlSchema.Read throws exception when an element is declared nillable

    - by G33kKahuna
    I have a simple schema that I am trying to read using the XmlSchema.Read() method. I keep getting: The 'nillable' attribute is not supported in this context. Here is the simple code in C#:

        XmlSchema schema = null;
        using (StreamReader reader = new StreamReader(<Path to Schema file name>))
        {
            schema = XmlSchema.Read(reader.BaseStream, null);
        }

    Below is the schema:

        <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
                   xmlns="http://xyz.com.schema.bc.mySchema"
                   attributeFormDefault="unqualified" elementFormDefault="qualified"
                   targetNamespace="http://xyz.com.schema.bc.mySchema"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="data">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Component">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element minOccurs="1" maxOccurs="unbounded" name="row">
                        <xs:complexType>
                          <xs:sequence>
                            <xs:element name="changed_by" type="xs:string" nillable="true" />
                            <xs:element name="column_name" type="xs:string" nillable="true" />
                            <xs:element name="comment_text" type="xs:string" nillable="true" />
                            <xs:element name="is_approved" type="xs:string" nillable="true" />
                            <xs:element name="log_at" type="xs:dateTime" nillable="true" />
                            <xs:element name="new_val" type="xs:string" nillable="true" />
                            <xs:element name="old_val" type="xs:string" nillable="true" />
                            <xs:element name="person_id" type="xs:string" nillable="true" />
                            <xs:element name="poh_id" type="xs:string" nillable="true" />
                            <xs:element name="pol_id" type="xs:string" nillable="true" />
                            <xs:element name="search_name" type="xs:string" nillable="true" />
                            <xs:element name="unique_id" type="xs:integer" nillable="true" />
                          </xs:sequence>
                        </xs:complexType>
                      </xs:element>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>
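
    A minimal sketch (assuming a hypothetical schema path, and not claiming to identify the root cause here) of an alternative way to load the schema: passing a validation callback to XmlSchema.Read so the offending construct and its line number are reported rather than thrown, and compiling through an XmlSchemaSet to validate the schema as a whole.

        using System;
        using System.Xml;
        using System.Xml.Schema;

        class SchemaLoadSketch
        {
            static void Main()
            {
                // Hypothetical path; substitute the real schema file.
                string xsdPath = @"C:\schemas\mySchema.xsd";

                XmlSchema schema;
                using (XmlReader reader = XmlReader.Create(xsdPath))
                {
                    // The callback reports exactly which construct the parser objects to,
                    // including line information, instead of aborting on the first problem.
                    schema = XmlSchema.Read(reader, (sender, e) =>
                        Console.WriteLine("{0} at line {1}: {2}",
                            e.Severity,
                            e.Exception != null ? e.Exception.LineNumber : 0,
                            e.Message));
                }

                // Compiling via a schema set runs the full set of schema checks.
                XmlSchemaSet set = new XmlSchemaSet();
                set.ValidationEventHandler += (sender, e) =>
                    Console.WriteLine("{0}: {1}", e.Severity, e.Message);
                set.Add(schema);
                set.Compile();
            }
        }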

    Read the article

  • How do I load the Oracle schema into memory instead of the hard drive?

    - by Andrew
    I have a certain web application that makes upwards of ~100 updates to an Oracle database in succession. This can take anywhere from 3-5 minutes, which sometimes causes the webpage to time out. A re-design of the application is scheduled soon but someone told me that there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file" but all I can find is information about the SQL* bulk data loader. Does anyone know what he's talking about? Is this really possible and is it a feasible quick fix or should I just wait until the application is re-designed?

    Read the article
