Search Results

Search found 61393 results on 2456 pages for 'data integration'.

  • Oracle Announces Oracle Data Integrator 12c and Oracle GoldenGate 12c

    - by Roxana Babiciu
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. To help customers succeed, Oracle is enhancing its data integration offering with Oracle Data Integrator 12c and Oracle GoldenGate 12c. These flexible and comprehensive solutions help customers capitalize on their data to reduce costs and drive business growth.

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don’t be late. This lab, HOL10233 "Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration", is scheduled for: Date: Monday, Oct 1 | Time: 10:45 AM - 11:45 AM | Location: Marriott Marquis - Nob Hill CD. In this Hands-on Lab, learn how to integrate Oracle E-Business Suite with third-party applications in your ecosystem with Oracle Application Integration Architecture (AIA) Foundation Pack. The lab focuses on SOA-based integration with Oracle E-Business Suite, using Oracle E-Business Suite Integrated SOA Gateway or Oracle Applications Adapter to orchestrate a process across disparate applications in your ecosystem. Objectives for this session are to: learn how to design and build an integration, from functional definition to development; learn how to automatically build services with robust error-handling logic; and learn how to discover and reuse services available in Oracle E-Business Suite.

    Read the article

  • Instruction vs data cache usage

    - by Nick Rosencrantz
    Say I've got a machine where instructions and data have separate caches ("Harvard architecture"). Which cache, instruction or data, is used more often? I mean "more often" in terms of time, not amount of data: data memory might see "more" traffic in terms of bytes, while the instruction cache might be accessed "more often", depending on the program. Are there different answers a) in general and b) for a specific program?

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data by someone restoring the files to another server: with Transparent Data Encryption in place, doing so requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.

    Read the article

  • Tuesday at Oracle OpenWorld 2012 - Must See Session: “Oracle Fusion Applications: Best Practices in Integration Design Patterns”

    - by Lionel Dubreuil
    Don’t miss this “CON8685 - Oracle Fusion Applications: Best Practices in Integration Design Patterns” session. Speakers: Rajesh Raheja - Senior Director, Development, Oracle; Ravi Sankaran - Director, Applications Development, Oracle. Date: Tuesday, Oct 2 | Time: 1:15 PM - 2:15 PM | Location: Palace Hotel - Telegraph. Oracle Fusion Applications provide various ways to integrate their functional capabilities with other Oracle applications as well as third-party and legacy applications. In this session, you will learn the patterns used when communicating with Oracle Fusion Applications with a SOA approach. It covers how to identify the integration artifacts (also known as assets) available in Oracle Enterprise Repository; how to invoke synchronous and asynchronous web services; how to import and export bulk data; and integration issues to look out for. The patterns apply to both on-premises and SaaS/cloud deployment modes and are indicated as such. Objectives for this session are to: highlight the various ways to integrate with Oracle Fusion Applications; showcase the use of Oracle Fusion Middleware technologies for integration; and describe best practices and design patterns for integration.

    Read the article

  • Google I/O 2012 - Spatial Data Visualization

    Speakers: Brendan Kenny, Enoch Lau. Maps were among the first data visualizations, but they can also provide the backdrop for visualizing your own spatial data. In this session, we'll take a voyage through the world of map-based data visualization, arming you with the tools you need to most effectively bring your data to life on a map using the Maps API v3. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 01:00:17.

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time. It works by allowing a pre-upgrade and post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper on Edition-Based Redefinition. Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION and a later version of the application with a data source that references the later EDITION. Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch - you need to be running with a versioned database and a versioned application initially so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained. There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem - it doesn't work correctly with an XA data source (patch available with bug 14075837).

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence. This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business intelligence to help you meet your big data business intelligence challenges. Register now! Sessions: "Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships" - Tom Plunkett, lead author of the Oracle Big Data Handbook. What does it take to build an award-winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] "Getting Value from Big Data Variety" - Richard Tomlinson, Director, Product Management, Oracle. Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] "How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture" - Oracle ACE Director Kevin McGinley. More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] "Oracle Data Integrator 12c As Your Big Data Integration Hub" - Oracle ACE Director Mark Rittman. Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: There are two (so far) types of entities that can participate in combat. Let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once. I'm trying to write the logic engine for combat by having combatants injected into it. I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects. I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue. A combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat, and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes? Also, with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit that completely so I don't need to check for it? If someone could help me just lay out a class diagram or something for this, it would help me a lot. Thanks. Edit: Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
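
    One possible arrangement, sketched in C# for brevity since the shapes carry over to PHP (all names below are hypothetical, not from the original post): the entity getter lives on the data interface, and each logic implementation is composed around whatever data object it is given, so real/mock and player/NPC vary independently.

        // All names are hypothetical. Data owns the entity reference; logic wraps data.
        public interface IEntity { }                     // implemented by Player and Npc

        public interface ICombatantData
        {
            IEntity Entity { get; }                      // the getter lives on the data side
            int Health { get; set; }
        }

        public interface ICombatantLogic
        {
            ICombatantData Data { get; }
            void TakeTurn(ICombatantLogic opponent);
        }

        // One logic class per entity type, each accepting any data implementation
        // (DB-backed or mock), so the two axes of variation stay independent.
        public class NpcCombatantLogic : ICombatantLogic
        {
            public NpcCombatantLogic(ICombatantData data) { Data = data; }
            public ICombatantData Data { get; private set; }
            public void TakeTurn(ICombatantLogic opponent) { /* NPC decision-making */ }
        }

    On the mismatch worry: a constructor could throw if its ICombatantData wraps the wrong entity type, but if the logic classes never branch on the concrete entity type, the mismatch check may not be needed at all.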

    Read the article

  • Windows Azure from a Data Perspective

    Before creating a data application in Windows Azure, it is important to make choices based on the type of data you have, as well as the security and the business requirements. There are a wide range of options, because Windows Azure has intrinsic data storage, completely separate from SQL Azure, that is highly available and replicated. Your data requirements are likely to dictate the type of data storage options you choose.

    Read the article

  • I have data that sends in "bursts" of 100 records with a significant delay. How do I structure my classes for multithreading?

    - by makerofthings7
    My datasource sends information in 100 batches of 100 records with a delay of 1 to 3 seconds between batches. I would like to start processing data as soon as it's received, but I'm not sure how to best approach this. Some ideas I've been playing with include: yield, ConcurrentDictionary, ConcurrentDictionary with INotifyPropertyChanged, events, etc. As you can see I'm all over the place, and would appreciate some tested guidance on how to approach this.
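
    A minimal sketch of one common approach, a producer/consumer pipeline (the Record type and Process method below are placeholders, not from the original question): a BlockingCollection lets worker threads start draining each burst the moment it arrives, and simply block during the 1-3 second gaps between bursts.

        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        public class Record { /* one of the 100 records in a batch */ }

        public class BurstProcessor
        {
            private readonly BlockingCollection<Record> queue = new BlockingCollection<Record>();

            public void StartWorkers(int workerCount)
            {
                for (int i = 0; i < workerCount; i++)
                {
                    Task.Factory.StartNew(() =>
                    {
                        // GetConsumingEnumerable blocks while the queue is empty
                        // and completes once CompleteAdding has been called.
                        foreach (Record record in queue.GetConsumingEnumerable())
                            Process(record);
                    }, TaskCreationOptions.LongRunning);
                }
            }

            // Called by the data source callback as each batch of 100 arrives.
            public void OnBatchReceived(IEnumerable<Record> batch)
            {
                foreach (Record record in batch)
                    queue.Add(record);
            }

            public void Finish() { queue.CompleteAdding(); }

            private void Process(Record record) { /* per-record work */ }
        }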

    Read the article

  • Monday at Oracle OpenWorld 2012 - Must See Session: “Using the Right Tools, Techniques, and Technologies for Integration Projects”

    - by Lionel Dubreuil
    Don’t miss this “CON8669 - Using the Right Tools, Techniques, and Technologies for Integration Projects” session with Timothy Hall, Sr. Director, Oracle. Date: Monday, Oct 1 | Time: 3:15 PM - 4:15 PM | Location: Moscone South - 308. Every integration project brings its own unique set of challenges. There are many tools and techniques to choose from. How do you ensure that you have a means of consistently and repeatedly making decisions about which tools, techniques, and technologies are used? In working with many customers around the globe, Oracle has developed a set of criteria to help evaluate a variety of common integration questions. This session explores these criteria and how they have been further organized into decision trees that offer a repeatable means for ensuring that project teams are given the same guidance from project to project. Using these techniques, the presentation shows how you can reduce risk and speed productivity for your projects. Objectives for this session are to: discuss common questions that arise at the start of integration projects; review various decision criteria and approaches for getting to a consistent set of answers; and explore how these techniques can be used to reduce risk and speed productivity.

    Read the article

  • Oracle Data Integrator know-how (Japanese-language OTN seminar)

    - by user788995
    Japanese-language on-demand seminar (recorded 2009/12/17) covering Oracle Data Integrator know-how. The recording and slides are available at:
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/ODI_1217_1330.wmv
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/ODI_1217_1330.mp4
    http://www.oracle.com/technetwork/jp/ondemand/db-technique/20091217-odi-knowhow-254853-ja.pdf

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program? What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want? An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the http request (either manually or automated) with some ids from the database and then look in the filesystem if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern?
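
    For what it's worth, a wiring-level test can sit below the full HTTP round trip. A minimal sketch, assuming NUnit and the hypothetical Controller/Repository/OutfileWriter classes from the question: build the object graph the same way production wiring does, point it at a seeded test database and a temp file, and assert on the file.

        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class ExportWiringTests
        {
            [Test]
            public void Controller_WritesFetchedRecordsToOutfile()
            {
                // Real collaborators, wired as the IoC container would wire them,
                // but pointed at test-only resources.
                string outPath = Path.GetTempFileName();
                var repository = new Repository("connection string for a seeded test DB");
                var writer = new OutfileWriter(outPath);
                var controller = new Controller(repository, writer);

                controller.Export(new[] { 1, 2, 3 });

                string written = File.ReadAllText(outPath);
                StringAssert.Contains("expected record text", written);
            }
        }

    This doesn't duplicate the unit tests' logic: those pin down each class's behavior, while this one only checks that the composed pieces actually talk to each other.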

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations), however recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about ambiguous behaviour of SQL Server Configurations I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations; the pertinent bits are that, as the utility loads and runs the package, events occur in the following order: (1) the dtexec utility loads the package; (2) the utility applies the configurations that were specified in the package at design time, in the order specified in the package (the one exception being Parent Package Variables configurations, which are applied only once, later in the process); (3) the utility applies any options that you specified on the command line; (4) the utility reloads the configurations that were specified in the package at design time, in the order specified in the package (again excepting Parent Package Variables configurations), using any command-line options that were specified, so different values might be reloaded from a different location; (5) the utility applies the Parent Package Variable configurations; (6) the utility runs the package. To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager’s blog post "Understand how SSIS package configurations are applied". The very nature of SQL Server Configurations means that the Connection String for the database holding the configuration values needs to be supplied from the command line. Typically then the call to execute your package resembles this: dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\"" The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the Connection String stored in the package for the "SSISConfigurations" Connection Manager, before then (2) applying the Connection String from the command line and then (3) applying the same configurations all over again. In the packages that I inherited, that first attempt to apply the configurations would time out (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the Configurations timed out (i.e. 15 seconds per Configuration) - in a package that only executes for ~8 seconds when it gets to do its actual work, a delay of 2 minutes was simply unacceptable. We had three options in how to deal with this: (1) get rid of the use of SQL Server Configurations and use .dtsConfig files instead; (2) edit the packages when they get deployed; (3) change the timeout on the "SSISConfigurations" Connection Manager. #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using Configurations in the first place. This left us with #3 - change the timeout on the Connection Manager.
    This is done by going into the properties of the Connection Manager, opening the "All" tab, and changing the Connect Timeout property to some suitable value (I chose 2 seconds). This change meant that the attempts to apply the SQL Server configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution but it's certainly better than it was. So there you have it - if you are having problems with SQL Server configuration timeouts within SSIS, try changing the timeout of the Connection Manager. Better still - don't bother using SQL Server Configurations in the first place. Even better - install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you. @Jamiet * Basically, we are leveraging a SSIS execution/logging framework in which the client had invested a lot of resources and SQL Server Configurations are an integral part of that.

    Read the article

  • Experience Oracle Enterprise Data Quality at OpenWorld 2012

    - by Mala Narasimharajan
    This year Enterprise Data Quality features sessions designed to cover topics above and beyond product strategy and roadmap. Specifically, we have planned sessions around data governance and how EDQ enables data acquisition, migration, and integration. In addition, check out our hands-on lab to see for yourself the new enhancements and integrations in EDQ. Finally, no OpenWorld is complete without fully exploring the DemoGrounds - come and learn how enterprise data quality is critical to enterprise success and fit-for-purpose data. Sessions: Mon, Oct 1, 1:45 PM - 2:45 PM, "Oracle Enterprise Data Quality: Product Overview and Roadmap", Moscone West 3006. Wed, Oct 3, 1:15 PM - 2:15 PM, "Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform", Moscone West 3000. Thu, Oct 4, 12:45 PM - 1:45 PM, "Data Acquisition, Migration, and Integration with the Oracle Enterprise Data Quality Platform", Moscone West 3005. Hands-on lab: Mon, Oct 1, 4:45 PM - 5:45 PM, "Introduction to Oracle Enterprise Data Quality Application", Marriott Marquis, Salon ½. Demo: "Trusted Data with Oracle Enterprise Data Quality", Moscone South, Right, S-243.

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a “production-ready” state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and “still favored tools I’d have been embarrassed to use in the ’80s“. DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the “mother ship”. I sat blinking in astonishment at the speaker’s vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don’t conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who’ve attempted to apply a standard query that works on MySQL, or some other database, but doesn’t produce the expected results on SQL Server are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And they sink further when we read messages in the Microsoft documentation informing us that we shouldn’t rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1-to-1 mapping between entity and table. You can do more complex mapping but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of Biztalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this. We create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement:

        XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");

        foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value,
                                            OrderDate = order.Attribute("OrderDate").Value }))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty (http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx), which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that caused the pointer in the XmlReader to progress past the start of the element and thus skip some elements:

        foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml", "Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value,
                                            OrderDate = order.Attribute("OrderDate").Value }))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    These look very similar; the key difference is the method we are calling, StreamReader. This method is what gives us streaming. What it does is return an enumerable list of elements; because of the way that LINQ works, this results in the data being streamed in.
        static IEnumerable<XElement> StreamReader(String filename, string elementName)
        {
            using (XmlReader xr = XmlReader.Create(filename))
            {
                xr.MoveToContent();
                while (xr.Read()) // reads the first element
                {
                    while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                    {
                        XElement node = (XElement)XElement.ReadFrom(xr);
                        yield return node;
                    }
                }
                xr.Close();
            }
        }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check if we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element, and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • Pass data to Child Window in Silverlight 4 using MVVM

    - by Archie
    Hello, I have a datagrid with master-detail implementation as follows:

        <data:DataGrid x:Name="dgData" Width="600"
                       ItemsSource="{Binding Path=ItemCollection}"
                       HorizontalScrollBarVisibility="Hidden"
                       CanUserSortColumns="False"
                       RowDetailsVisibilityChanged="dgData_RowDetailsVisibilityChanged">
            <data:DataGrid.Columns>
                <data:DataGridTextColumn Header="Item" Width="*" Binding="{Binding Item,Mode=TwoWay}"/>
                <data:DataGridTextColumn Header="Company" Width="*" Binding="{Binding Company,Mode=TwoWay}"/>
            </data:DataGrid.Columns>
            <data:DataGrid.RowDetailsTemplate>
                <DataTemplate>
                    <data:DataGrid x:Name="dgrdRowDetail" Width="400" AutoGenerateColumns="False"
                                   HorizontalAlignment="Center" HorizontalScrollBarVisibility="Hidden" Grid.Row="1">
                        <data:DataGrid.Columns>
                            <data:DataGridTextColumn Header="Date" Width="*" Binding="{Binding Date,Mode=TwoWay}"/>
                            <data:DataGridTextColumn Header="Price" Width="*" Binding="{Binding Price, Mode=TwoWay}"/>
                            <data:DataGridTemplateColumn>
                                <data:DataGridTemplateColumn.CellTemplate>
                                    <DataTemplate>
                                        <Button Content="Show More Details" Click="buttonShowDetail_Click"></Button>
                                    </DataTemplate>
                                </data:DataGridTemplateColumn.CellTemplate>
                            </data:DataGridTemplateColumn>
                        </data:DataGrid.Columns>
                    </data:DataGrid>
                </DataTemplate>
            </data:DataGrid.RowDetailsTemplate>
        </data:DataGrid>

    I want to open a child window on clicking the button, showing more details about the product. I'm using the MVVM pattern. My Model contains a method which takes the item name as input and returns the details data. My problem is: how should I pass the item to the ViewModel, which will get the details data from the Model? And where should I open the new child window, in the View or the ViewModel? Please help. Thanks.
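
    One hedged way to split the responsibilities (all class and member names below are illustrative, not from the original post): the ViewModel asks the Model for the details, and the View, which owns UI concerns, opens the ChildWindow. The button's click handler pulls the clicked row's item out of its DataContext:

        // In the ViewModel: no UI types here, just a lookup against the Model.
        public class CatalogViewModel
        {
            private readonly ICatalogModel model;
            public CatalogViewModel(ICatalogModel model) { this.model = model; }

            public ProductDetails GetDetails(string itemName)
            {
                return model.GetDetails(itemName); // the Model does the actual retrieval
            }
        }

        // In the View's code-behind: creating and showing windows stays a View concern.
        private void buttonShowDetail_Click(object sender, RoutedEventArgs e)
        {
            var row = (ProductRow)((FrameworkElement)sender).DataContext;
            var details = viewModel.GetDetails(row.Item);
            var detailsWindow = new DetailsChildWindow { DataContext = details };
            detailsWindow.Show(); // Silverlight's ChildWindow.Show()
        }

    Keeping window creation in the View keeps the ViewModel testable; the ViewModel only exposes the data the child window will bind to.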

    Read the article

  • Date formatting using data annotations for a dataform in Silverlight

    - by Aim Kai
    This probably has a simple answer to it, but I am having problems just formatting the date for a dataform field:

        <df:DataForm x:Name="Form1" ItemsSource="{Binding Mode=OneWay}"
                     AutoGenerateFields="True" AutoEdit="True" AutoCommit="False"
                     CommitButtonContent="Save" CancelButtonContent="Cancel"
                     CommandButtonsVisibility="Commit" LabelPosition="Top"
                     ScrollViewer.VerticalScrollBarVisibility="Disabled"
                     EditEnded="NoteForm_EditEnded">
            <df:DataForm.EditTemplate>
                <DataTemplate>
                    <StackPanel>
                        <df:DataField>
                            <TextBox Text="{Binding Title, Mode=TwoWay}"/>
                        </df:DataField>
                        <df:DataField>
                            <TextBox Text="{Binding Description, Mode=TwoWay}" AcceptsReturn="True"
                                     HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto"
                                     Height="" TextWrapping="Wrap" SizeChanged="TextBox_SizeChanged"/>
                        </df:DataField>
                        <df:DataField>
                            <TextBlock Text="{Binding Username}"/>
                        </df:DataField>
                        <df:DataField>
                            <TextBlock Text="{Binding DateCreated}"/>
                        </df:DataField>
                    </StackPanel>
                </DataTemplate>
            </df:DataForm.EditTemplate>
        </df:DataForm>

    I have bound this to a note class which has the annotation for the DateCreated field:

        /// <summary>
        /// Gets or sets the date created of the note annotation
        /// </summary>
        [Display(Name="Date Created")]
        [Editable(false)]
        [DisplayFormat(DataFormatString = "{0:u}", ApplyFormatInEditMode = true)]
        public DateTime DateCreated { get; set; }

    Whatever I set the data format string to, it comes back as e.g. 4/6/2010 10:02:15 AM. I want this formatted as yyyy-MM-dd HH:mm:ss. I have tried the custom format above, {0:yyyy-MM-dd hh:mm:ss}, but it remains the same output. The same happens for {0:u} or {0:s}. Any help would be gratefully received. :)
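
    One likely cause, offered as a guess rather than a confirmed diagnosis: with an explicit EditTemplate, the plain {Binding DateCreated} on the TextBlock bypasses the [DisplayFormat] annotation, which DataForm applies to auto-generated fields. A converter on that one binding formats the value regardless (the class name here is illustrative):

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class DateTimeFormatConverter : IValueConverter
        {
            // Formats any DateTime as "yyyy-MM-dd HH:mm:ss" for display.
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value is DateTime)
                    return ((DateTime)value).ToString("yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture);
                return value;
            }

            // Display-only field, so conversion back is not supported.
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    The TextBlock binding would then become {Binding DateCreated, Converter={StaticResource dateTimeFormatConverter}}, with the converter declared as a resource.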

    Read the article

  • Core Data: Keypath "objectID" not found in entity

    - by Martin Gordon
    I'm using NSFetchedResultsController with a predicate to load a list of Documents in my application. I want to load all the Documents except the currently active one. I am using Rentzsch's MOGenerator to create a _Document class and then I put all my custom code in the Document subclass. _Document generates an objectID property with type DocumentID. In the class that creates the controller, I set the controller's currentDocID property:

        controller.currentDocID = self.document.objectID;

    In the controller itself, I lazy-load the fetchedResultsController like this:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Document"
                                                      inManagedObjectContext:managedObjectContext];
            [fetchRequest setEntity:entity];

            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(objectID != %@)", self.currentDocID];
            [fetchRequest setPredicate:predicate];

            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"dateModified" ascending:NO];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [fetchRequest setSortDescriptors:sortDescriptors];

            NSFetchedResultsController *aFetchedResultsController =
                [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                    managedObjectContext:managedObjectContext
                                                      sectionNameKeyPath:nil
                                                               cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;

            [aFetchedResultsController release];
            [sortDescriptor release];
            [sortDescriptors release];

            return fetchedResultsController;
        }

    When the fetchedResultsController loads, my app crashes with an unhandled exception:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: 'keypath objectID not found in entity <NSSQLEntity Document id=1>'

    It's my understanding that all NSManagedObjects have an objectID, whether temporary or permanent. Is this not the case? Any thoughts?

    Read the article

  • System.Data.SQLite parameterized queries with multiple values?

    - by Rezzie
    I am trying to run a bulk deletion using parameterized queries. Currently, I have the following code:

        pendingDeletions = new SQLiteCommand(@"DELETE FROM [centres] WHERE [name] = $name", conn);
        foreach (string name in selected)
            pendingDeletions.Parameters.AddWithValue("$name", centre.Name);
        pendingDeletions.ExecuteNonQuery();

    However, the value of the parameter seems to be overwritten each time and I end up just removing the last centre. What is the correct way to execute a parameterized query with a list of values?
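
    A sketch of one fix, using only standard System.Data.SQLite/ADO.NET calls (variable names follow the question; the transaction is an optional speed-up, not part of the original code): add the parameter once, then re-bind its value and execute once per name, instead of re-adding the parameter and executing a single time.

        // Requires: using System.Data; (for DbType)
        using (SQLiteTransaction tx = conn.BeginTransaction())
        using (SQLiteCommand cmd = new SQLiteCommand(@"DELETE FROM [centres] WHERE [name] = $name", conn, tx))
        {
            SQLiteParameter nameParam = cmd.Parameters.Add("$name", DbType.String);
            foreach (string name in selected)
            {
                nameParam.Value = name;   // re-bind the value; don't re-add the parameter
                cmd.ExecuteNonQuery();    // one delete per name
            }
            tx.Commit();
        }

    Wrapping the loop in a transaction also avoids one disk flush per DELETE, which matters once the list grows.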

    Read the article
