Search Results

Search found 415 results on 17 pages for 'transactional'.

Page 7 of 17

  • transactions in MS Access

    - by arcticpenguin
    Let's say I have the following code in a form, triggered on some click event:

    DoCmd.SetWarnings False
    DoCmd.OpenQuery "AddSomeStuff"
    DoCmd.OpenQuery "UpdateSomeOtherStuff"
    DoCmd.OpenQuery "DeleteABunchOfCrap"
    DoCmd.SetWarnings True

    Can I assume that the three update queries I executed (in SQL Server) are not transactional, in that they run in separate transactions?
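
    One way to remove the doubt entirely, regardless of how Access fires the queries, is to move the three steps into a single SQL Server stored procedure wrapped in an explicit transaction and call that instead. A minimal sketch, with hypothetical object names standing in for the three saved queries:

    -- Sketch only: the three steps as one atomic unit on the server side.
    -- SET XACT_ABORT ON makes any run-time error roll back the whole transaction.
    CREATE PROCEDURE dbo.usp_DoAllTheStuff
    AS
    BEGIN
        SET NOCOUNT ON;
        SET XACT_ABORT ON;

        BEGIN TRANSACTION;

        INSERT INTO dbo.Stuff (Col1)                        -- "AddSomeStuff"
        SELECT Col1 FROM dbo.StagingStuff;

        UPDATE dbo.OtherStuff SET Flag = 1 WHERE Flag = 0;  -- "UpdateSomeOtherStuff"

        DELETE FROM dbo.Junk WHERE IsObsolete = 1;          -- "DeleteABunchOfCrap"

        COMMIT TRANSACTION;
    END;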

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I try to adopt best practices from here and there and build my own working practices while observing the usual conventions, etc. I'm currently working on a project whose goal is to migrate the security access model from one environment's Active Directory to another's automatically. I don't know about the rest of you, but I have real difficulty picking only one approach and then developing with it. I learn something new every day while visiting SO, and I recently wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, and it has proved very practical in transactional programming for process systems. It seems less practical for desktop applications, since there are plenty of variables to consider in a desktop application that you don't have to care about in transactional programming, where you're playing only with data. My current project deals with Groups, Organizational Units and Users, which are all entries in Active Directory. This looks like a good candidate for generics, the approach also taken by Bart De Smet's Linq to AD on CodePlex. He has a DirectorySource<T>, and to manage, say, groups, he instantiates a source with the proper type: var groups = new DirectorySource<Group>(); This seems like a very good way of doing things. Still, I keep going from one pattern to another and can't strictly stick to one. I'm aware that one must not stay with only one way of doing things, since each pattern has advantages as well as disadvantages under certain usage conditions, yet I find myself wanting to use both: a singleton Façade class with underlying factories that represent the subsystems (GroupsFactory, UsersFactory, OrganizationalUnitsFactory), where each factory offers the possible operations for its respective entity (group, user, OU). To make a very long story short, I often have plenty of ideas while developing and this causes me trouble, as I go from one idea to another and feel completely lost after a while. Although I understand the advantages and disadvantages and have no trouble choosing one pattern over another depending on the situation, when it comes to the programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good. That is because I can't stand not doing something "perfect" the first time. I play both roles on the project: project manager and programmer, and I am more comfortable in the project manager, architectural and analytical roles than in the developer's. Do any of you have good habits to observe in project development? Thanks to you all! =)

    Read the article

  • Rolling back a transaction in a Grails Service

    - by UltraVi01
    I have been updating all my services to be transactional, using Grails' ability to roll back when a RuntimeException is thrown in the service. In most cases I have been doing this:

    def domain = new Domain(field: field)
    if (!domain.save()) {
        throw new RuntimeException()
    }

    Anyway, I wanted to verify that this will indeed roll back the transaction... it got me thinking whether, at this point, it has already been committed. Also, if not, would setting flush: true change that? I am not very familiar with how Spring/Hibernate does all of this :)

    Read the article

  • Why does Mercurial only have one level of rollback?

    - by Nick Pierpoint
    I understand the restrictions of rollback and the care required in its use (for example, http://www.selenic.com/mercurial/hg.1.html#rollback), but I just wondered why there is only one level of rollback. My guess is that it's a design decision, and that the hassle of storing multiple previous transactional states to support multiple levels of rollback is more trouble than it's worth.

    Read the article

  • What is the "removing backup files" installation step?

    - by mfeingold
    In many Windows installers the last step is called "removing backup files". I understand that, to provide transactional integrity of the install process, some "backup files" may have been created and have to be cleaned up. What I do not understand is why, on many occasions, this step takes considerably longer than the rest of the installation. Any idea why?

    Read the article

  • Grails - how to save a domain object inside a Service?

    - by w-
    I have a service, and inside one of its functions I'm creating a domain object and trying to save it. When it gets to the save part, I get the error "No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here". What do I need to do in order to save a domain object inside a service? Everything on the internet makes it look like this should just work...

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain there without losing a single record. The most significant part is that they still support the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember: Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP. Disk-based tables refer to the normal tables we have been creating in SQL Server since its inception; these tables use fixed-size 8 KB pages that are read from and written to disk as a unit. Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which compiles them into machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they cannot be used to reference any disk-based table. Interpreted Transact-SQL stored procedures refer to what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP: The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with the 32-bit editions. Creating Databases: Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; Above example code creates files on three different drives (D:  S: and R:) for the data files and in memory storage so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that binary collation was specified as Windows (non-SQL). BIN2 collation is the only collation support at this point for any indexes on memory optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA file group to an existing database, use the below command to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk based table or a memory –optimized table but yes there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause as shown in the Create Table query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) Memory-optimized table should always be defined with a DURABILITY value which can be either SCHEMA_AND_DATA or  SCHEMA_ONLY the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist the data to disk which means the data durability is compromised whereas DURABILITY= SCHEMA_AND_DATA ensures that data is also persisted along with the schema. Indexing Memory Optimized Table A memory-optimized table must always have an index for all tables created with DURABILITY= SCHEMA_AND_DATA and this can be achieved by declaring a PRIMARY KEY Constraint at the time of creating a table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is considered as a memory optimized table and not just a normal table and also used the DURABILITY Clause= SCHEMA_AND_DATA which means it will persist data along with metadata and also you can notice this table has a PRIMARY KEY mentioned upfront which is also a mandatory clause for memory-optimized tables. We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well. Now as we covered the basics of Memory Optimized tables and understood the key things to remember while using memory optimized tables, let’s explore more using examples to understand the Performance gains using memory-optimized tables. 
I will be using the database which i created earlier in this article i.e. InMemoryDB in the below Demo Exercise. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating an another memory optimized table with similar structure but DURABILITY = SCHEMA_Only CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 Inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 Gb RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business table and memory- optimized tables with Durability SCHEMA_ONLY is even faster as it does not bother persisting its data to disk which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory_Optimized tables on you, I hope you like this article and it helped you understand  the fundamentals of IN-Memory OLTP . Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
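
    The post mentions natively compiled stored procedures but does not show one. As a rough sketch only (not from the original article), a natively compiled procedure inserting into the dbo.Memorytable_durable table created above could look like this on SQL Server 2014:

    -- Sketch: a natively compiled procedure for the memory-optimized table above.
    -- Native procedures may reference only memory-optimized tables (see the restrictions above).
    CREATE PROCEDURE dbo.usp_InsertMemoryRow
        @Id INT,
        @Name CHAR(40)
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.Memorytable_durable (Id, Name) VALUES (@Id, @Name);
    END;
    GO
    -- Example call:
    EXEC dbo.usp_InsertMemoryRow @Id = 100001, @Name = 'sachin100001';

    The ATOMIC block gives the procedure its own transaction semantics: if any statement inside it fails, the whole block rolls back.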

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years Niraj enjoys working on – IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received Microsoft MVP award for ASP.NET, Connected Systems and most recently on Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred as transactional database and reporting database. Though there are tools / techniques which can allow you to create snapshot of your transactional database for reporting purpose, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an Information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where a need for data warehouse or an OLAP system arises. A Key point to remember is a data warehouse is mostly a relational database. It’s built on top of same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured let’s understand key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) OLTP system should be capable of tracking its changes as all these changes should go back to data warehouse for historical recording. For e.g. if an OLTP transaction moves a customer from silver to gold category, OLTP system needs to ensure that this change is tracked and send to data warehouse for reporting purpose. A report in context could be how many customers divided by geographies moved from sliver to gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of box features provided by some databases e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes we need to provision a batch process that can run periodically and takes these changes from OLTP system and dump them into data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, we have jobs that run periodically to move these changes to warehouse. The question though remains is how warehouse will record these changes? This structural change in data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
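
    To make the star schema description above concrete, here is a minimal, hypothetical T-SQL sketch (all table and column names are invented for illustration): a sales fact table carrying the measures, with surrogate-key references to an agent dimension and a date dimension.

    -- Dimension: sales agents (surrogate key plus descriptive attributes).
    CREATE TABLE dbo.DimSalesAgent (
        AgentKey     INT IDENTITY(1,1) PRIMARY KEY,
        AgentName    NVARCHAR(100) NOT NULL,
        Region       NVARCHAR(50)  NULL
    );

    -- Dimension: calendar dates (one row per day).
    CREATE TABLE dbo.DimDate (
        DateKey      INT PRIMARY KEY,          -- e.g. 20100609
        CalendarDate DATE NOT NULL,
        [Month]      TINYINT NOT NULL,
        [Year]       SMALLINT NOT NULL
    );

    -- Fact: one row per sale; the numbers to slice and dice plus foreign keys into the dimensions.
    CREATE TABLE dbo.FactSales (
        SalesKey     BIGINT IDENTITY(1,1) PRIMARY KEY,
        AgentKey     INT NOT NULL REFERENCES dbo.DimSalesAgent (AgentKey),
        DateKey      INT NOT NULL REFERENCES dbo.DimDate (DateKey),
        SalesAmount  DECIMAL(18,2) NOT NULL,
        Quantity     INT NOT NULL
    );

    -- Typical star-schema query: total sales by region and year.
    SELECT a.Region, d.[Year], SUM(f.SalesAmount) AS TotalSales
    FROM dbo.FactSales AS f
    JOIN dbo.DimSalesAgent AS a ON a.AgentKey = f.AgentKey
    JOIN dbo.DimDate AS d ON d.DateKey = f.DateKey
    GROUP BY a.Region, d.[Year];

    A snowflake variant would simply normalize the dimensions further, for example moving Region out of DimSalesAgent into its own table.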

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur. Starting with Oracle Data Integrator 11.1.1.6.0, ODI offers a solution to process Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data. So, what is Big Data? Big Data includes: structured data (data from relational data stores and XML data stores), semi-structured data (data from weblogs) and unstructured data (data from text blobs and images). Traditionally, business decisions are based on information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has its challenges: the volume of data, the velocity of data (the high rate at which data is generated) and the variety of data. In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open-source framework, a highly scalable and fault-tolerant system for storing and processing large amounts of data. Hadoop provides two key services: distributed and reliable storage, called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called MapReduce. Innovations in Hadoop and its related technology continue to evolve rapidly, so it is highly recommended to follow information on the web to keep up to date. Oracle's vision is to provide a comprehensive solution to address the challenges posed by Big Data, and it is providing the necessary hardware, software and tools for processing Big Data. The Oracle solution includes: Big Data Appliance, Oracle NoSQL Database, the Cloudera distribution for Hadoop, Oracle R Enterprise (R is a statistical package which is very popular among data scientists), the ODI solution for Big Data, and Oracle Loader for Hadoop for loading data from Hadoop to Oracle. Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html ODI Solution for Big Data: ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of Big Data processing in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data. This includes the capability to load data into Hadoop, process data in Hadoop and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs).
    IKM File to Hive (LOAD DATA) – load unstructured data from a file (local file system or HDFS) into Hive; IKM Hive Control Append – transform and validate structured data on Hive; IKM Hive Transform – transform unstructured data on Hive; IKM File/Hive to Oracle (OLH) – load processed data from Hive to Oracle; RKM Hive – reverse-engineer Hive tables to generate models. Using the loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a corresponding partitioned Hive table schema. Using the Hive Control Append KM you can validate and transform data in Hive. In the example below, two source Hive tables are joined and mapped to a target Hive table. The Hive Transform KM facilitates processing of semi-structured data in Hive. In the example below, weblog data is processed using a Perl script and mapped to a target Hive table. Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product. ODI greatly enhances the OLH capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and the KM options, and it seamlessly invokes OLH when executing the scenario. In the example below, an HDFS file is mapped to a table in Oracle. Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using ODI Studio on your development machine, create and develop an ODI solution for processing Big Data by connecting to a MySQL or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN.
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN All of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. Examples of this type of data relate to employee information, where the table may have both an employee's ID number for each record and a field that holds the ID number of the employee's supervisor or manager. To retrieve the data, the table has to be joined to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL (see the short sketch below) This is a very interesting question I received from a new developer: how can I insert multiple values into a table using only one insert? When multiple records are to be inserted into a table, the following is the common way to do it using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar A straight blog post with a script to find the current week's date and day based on the parameters passed to the function. 2008 In my early years I had almost the same confusion that many developers have in theirs. Here are two of the interesting questions which I attempted to answer back then. Even if you are an experienced developer, you may still like to read the following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with an aggregate function? Here is a simple example of how to do it. Create a Comma Delimited List Using SELECT Clause From Table Column A straight-to-script example where I explain how to do something easily and quickly. Compound Assignment Operators SQL SERVER 2008 introduced the concept of compound assignment operators, which have been available in many other programming languages for quite some time. A compound assignment operator is one where a variable is operated upon and assigned in the same statement. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer can be both YES and NO: "If we PIVOT any table and UNPIVOT that table, do we get our original table?" Read the blog post for the explanation. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be that more tables are about to be joined onto the interim table, and more operations are still to be applied to it (e.g. ORDER BY, HAVING, etc.). Besides, it is possible that there is no interim table at all; sometimes the final table is what is generated when the query is run.
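
    As a quick illustration of the UNION ALL multi-insert item from the 2007 list above, here is a minimal sketch; the table name is invented for the example, and the row-constructor variant shown alongside it needs SQL Server 2008 or later.

    -- Hypothetical table used only for this illustration.
    CREATE TABLE dbo.Colors (ColorId INT NOT NULL, ColorName VARCHAR(20) NOT NULL);

    -- Multi-row insert with a single statement using SELECT ... UNION ALL.
    INSERT INTO dbo.Colors (ColorId, ColorName)
    SELECT 1, 'Red'   UNION ALL
    SELECT 2, 'Green' UNION ALL
    SELECT 3, 'Blue';

    -- On SQL Server 2008 and later, the VALUES row constructor does the same thing.
    INSERT INTO dbo.Colors (ColorId, ColorName)
    VALUES (4, 'Yellow'), (5, 'Black');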
    2010 Stored Procedure and Transactions If a stored procedure were inherently transactional, it would roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a stored procedure does not, by itself, add a transactional feature to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure, the most common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, but not any more. If you have SQL Server 2008 R2 installed you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk Replacing IN with EXISTS does not always give better performance; however, in the case listed above it certainly does. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best In every performance tuning exercise I hear the conversation among developers where some prefer subqueries and some prefer joins. In this two-part blog post, I explain the same in detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution (a short sketch follows at the end of this excerpt) MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; at present, by using the MERGE statement, we can include the logic of such data changes in one statement that updates the data when it is matched and inserts it when it is unmatched. 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario of my setup. Create a table; insert 1,000 records; check the statistics; now insert ten times more records (10,000); check the statistics – they will NOT be updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are two different things - they cannot be compared. It would be like comparing apples and oranges. Each has its own unique use. However, they can often be used interchangeably, and in real life (i.e., the production environment) I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a quiz series around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server. When there are different workloads running on SQL Server and each workload needs different resources, or when workloads compete for resources and affect the performance of the whole server, Resource Governor becomes very important.
    Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video SELECT * retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to the use of sub-optimal execution plans; it uses the clustered index in most cases instead of the optimal index; and it is difficult to debug. SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male' swap it with 'female', and if the value is 'female' swap it with 'male'. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
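
    The MERGE item in the 2010 list above is easiest to see in code. Here is a minimal sketch against two hypothetical tables (not from the original post), combining insert, update and delete in a single statement:

    -- Hypothetical tables: synchronize dbo.TargetCustomers from dbo.SourceCustomers.
    MERGE dbo.TargetCustomers AS t
    USING dbo.SourceCustomers AS s
        ON t.CustomerId = s.CustomerId
    WHEN MATCHED THEN                         -- row exists in both: update it
        UPDATE SET t.CustomerName = s.CustomerName
    WHEN NOT MATCHED BY TARGET THEN           -- row only in source: insert it
        INSERT (CustomerId, CustomerName) VALUES (s.CustomerId, s.CustomerName)
    WHEN NOT MATCHED BY SOURCE THEN           -- row only in target: delete it
        DELETE;                               -- MERGE must end with a semicolon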

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to'; 'service at'; etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise. 
Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide for the ability to capture different views of a product such as engineering, manufacturing, and service which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing requirements BOM, features BOM, and packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data.  When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied.  The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated.  I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications.  What's more, the data relationships between customer and product, and product and suppliers must be right. This is the minimum quality needed to insure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point of contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help insure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process.  In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution takes a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. 
Oracle, with its deep understanding of application data is the logical choice for managing all your master data within the enterprise whether or not your organization actually runs any Oracle Applications.

    Read the article

  • Technical Article: Oracle Magazine Java Developer of the Year Adam Bien on Java EE 6 Simplicity by Design

    - by janice.heiss(at)oracle.com
    Java Champion and Oracle Magazine Java Developer of the Year, Adam Bien, offers his unique perspective on how to leverage new Java EE 6 features to build simple and maintainable applications in a new article in Oracle Magazine. Bien examines different Java EE 6 architectures and design approaches in an effort to help developers build efficient, simple, and maintainable applications.From the article: "Java EE 6 consists of a set of independent APIs released together under the Java EE name. Although these APIs are independent, they fit together surprisingly well. For a given application, you could use only JavaServer Faces (JSF) 2.0, you could use Enterprise JavaBeans (EJB) 3.1 for transactional services, or you could use Contexts and Dependency Injection (CDI) with Java Persistence API (JPA) 2.0 and the Bean Validation model to implement transactions.""With a pragmatic mix of available Java EE 6 APIs, you can entirely eliminate the need to implement infrastructure services such as transactions, threading, throttling, or monitoring in your application. The real challenge is in selecting the right subset of APIs that minimizes overhead and complexity while making sure you don't have to reinvent the wheel with custom code. As a general rule, you should strive to use existing Java SE and Java EE services before expanding your search to find alternatives." Read the entire article here.

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading them, as they came from industry leaders and gurus. I was researching the subject further and ended up at the following white paper written by Microsoft. Understanding and Controlling Parallel Query Processing in SQL Server: Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The abstract above has been taken from here. The real question is: have parallel queries made the life of a DBA much simpler, or are they viewed as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
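
    For readers who want to experiment with the MAXDOP settings discussed in the comments, here is a minimal sketch showing the two usual knobs: the server-wide setting and the per-query hint. The reporting query is only an illustrative example against the AdventureWorks sample database.

    -- Server-wide cap on the degree of parallelism (0 = let SQL Server decide).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 4;
    RECONFIGURE;

    -- Per-query override: force this illustrative query to run serially.
    SELECT SalesPersonID, SUM(TotalDue) AS TotalSales
    FROM Sales.SalesOrderHeader
    GROUP BY SalesPersonID
    OPTION (MAXDOP 1);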

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
      We just released this morning SOA Suite 11gR1 Patch Set 2 (PS2)! You can download it as usual from: OTN (main platforms only) eDelivery (all platforms)   11gR1 PS2 is delivered as a sparse installer, that is to say that it is meant to be applied on the latest full release (11gR1 PS1). The good part is that it’s great for existing PS1 users who simply need to apply the patch and run the patch assistant – the not so good part is that new users will first need to download PS1. What’s in that release? Bug fixes of course but also several significant new features. Here is a short selection of the most significant features in PS2: Spring component (for native Java extensibility and integration) SOA Partitions (to organize and manage your composites) Direct Binding (for transactional invocations to and from Oracle Service Bus) HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST) Resequencer (for ordering out-of-order messages) WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments) Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time – all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content so if you don’t find what you need today, try again soon!)   Technorati Tags: ps1,11gr1ps2,new release,oracle soa suite,oracle

    Read the article

  • CodePlex Daily Summary for Wednesday, June 09, 2010

    CodePlex Daily Summary for Wednesday, June 09, 2010

    New Projects

    .NET Transactional File Manager: Transactional File Manager is a .NET API that supports including file system operations such as file copy, move, delete in a transaction. It's an i...
    3D World Studio Content Pipeline for Windows Phone 7: This is a port of PhotonicGames' project: http://xna3dws.codeplex.com/releases/view/42994 for the Windows Phone 7 tools (XNA 4.0 CTP).
    Advanced Script Editor for 3D Rad: Advanced Script Editor makes it easier for 3D Rad coders to write scripts. Developed in C#, it features a functions list, a favourites list, object...
    Ajax ASP.Net Forum: A fast & lightweight open source free forum developed in ASP.Net 3.5, AJAX, CSS, SQL & Javascript Cache (filter-sort-move through table records at ...
    Axon: Axon is the home automation system that I will be running in my home. It will be a collection of different technologies and projects, often experim...
    BigBallz: A betting-pool site for various championships, initially designed for the 2010 football World Cup.
    BigfootMVC: MVC Framework for DotNetNuke
    BigfootSQL: A StringBuilder for SQL. BigfootSQL was built with simplicity in mind. It assumes that you are comfortable writing SQL but dislike the effort required ...
    Bxf (Basic XAML Framework): Basic Xaml Framework (Bxf) is a simple, streamlined set of UI components designed to demonstrate the minimum framework functionality required to ma...
    elZerf - elektronische Zeiterfassung: An electronic time-tracking system developed as part of a seminar paper in the Web Application Programming module.
    IntoFactories.Net - Samples: Project to host samples created by members of the IntoFactories.NET Team blog.
    Lanchonete: A system for managing snack bars.
    Medieval Dynasties: Medieval Dynasties is a game written in C# 3.5 and XNA 3.1 at the moment. It is inspired by Crusader Kings, Total War and Civilization.
    PMMsg: A project to replace the standard messaging client on the Windows Mobile platform. Mainly geared towards Windows Mobile 6.5.3 VGA devices. Also an...
    PunkPong: PunkPong is an open source "Pong"-like game written entirely in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform a...
    Renegade Legion Fighter Calculator: In working on assigning fighters to squadrons, flights, and groups for a campaign, I was struck by the sheer amount of calculations I had to make. ...
    Sharpotify - Spotify .Net Library: Sharpotify is a Spotify library in C#. It is based on the Jotify and SharPot projects. It is not a libspotify wrapper; it is a full .Net Spotify protoc...
    Silverlight load on demand with MEF: With MEF, a Silverlight control can be split into several packages (xap files). Each package can contain one or more pages and it will download on dem...
    SOLID by example: Source code examples to help understand SOLID design principles. Most of them were taken from http://www.lostechies.com/
    SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: A version of RS.EXE that you can use with Forms Authentication in Native Mode. Use the following arguments to specify credentials (just like Basic ...
    Stripper: Removes diacritics and other unwanted characters to produce more standardized file names.
    study: study
    UncoverPIC: UncoverPIC is a Silverlight game strongly inspired by the famous arcade game "GalsPanic" (see http://en.wikipedia.org/wiki/Gals_Panic ). It was dev...
    Unity3D Untitled MMO: Unity3D Untitled MMO Framework
    UnnamedShop: UnnamedShop
    XBStudio.asp.net.automation: A Unit Testing Automation library for asp.net
    XBStudio.Web: XBStudio Web Application

    New Releases

    3D World Studio Content Pipeline for Windows Phone 7: Initial Release (0.1): This is the first release of the project, with plenty of hackery and kludges to go around, but it mostly works! Let me know if you hit any bugs.
    Acies: Acies - Alpha Build 0.0.10: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...
    Advanced Script Editor for 3D Rad: Advanced Script Editor - Version 2.6: Despite various previous releases on the 3D Rad forum, this is the first release on CodePlex.
    Ajax ASP.Net Forum: First Release: First Release prior to CodePlex Publish (send to admins). So, it doesn't all finish VERSION: 0.1.2 FEATURES Main Home Where all the Forums (called ...
    Artist Follower for Microsoft Access: Artist Follower 0.5.1: Artists Follower changes: Just one form to manage artists and links!!!
    Artist Follower for Microsoft Access: Artists Follower 0.5.0: This is the first release of Artist Follower.
    ASP.NET MVC SiteMap provider - MvcSiteMapProvider: MvcSiteMapProvider 2.0.0 CTP1: This is a community technology preview of MvcSiteMapProvider version 2.0. It is not backwards compatible with older MvcSiteMapProvider versions. ...
    B&W Port Scanner: Black`n`White Port Scanner 4.0: Version 4 includes: - Improved vulnerability detection tools - Report Creation - Improved Stability - Much better port information database - Nume...
    BaseCalendar: BaseControls 1.1: BaseControls 1.1 contains the BaseCalendar ASP.NET control. Changes: Rendering TH by default inside THEAD. Added option (ShowMinNumWeeks) to r...
    BigfootSQL: BigfootSQL Source Code: BigfootSQL C# Version 01
    Commerce Server 2009 Orders using Pipelines in a Console Application: ConsoleApplication To PLace Orders: ConsoleApplication To PLace Orders with Commerce Server 2009 foundation
    Community Forums NNTP bridge: Community Forums NNTP Bridge V33: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V34: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    ContainerOne - C# application server: V0.1.2.0: New minor release containing: Integration test component for runtime testing Refactored and cleaned solution files First unit tests
    Extend SmallBasic: Teaching Extensions v.020: Moved Tortoise.approve to ProgramWindow.TakeScreenShot()
    fleet It: v0.06 Alpha: v0.06 Alpha - Features Caching implemented for fleets Various Bug fixes Implemented Settings. Resolved logical issue with Getting fleets U...
    Frotz.NET: Frotz.NET B2: In addition to B1 changes: - Added ZTools to enable debugging view of zcode files - Added rudimentary scroll back buffer. B1 Changes: - Got Adapt...
    FsObserver: FsObserver 2.0: This is basically the same as FsObserver 1.0 but the "-help" documentation has been cleaned up somewhat and the code has been refactored so that it...
    GPdotNET - Genetic Programming Tool: GPdotNETv1.0: GPdotNET v.1.0 - more details on http.bhrnjica.wordpress.com/gpdotnet
    HERB.IQ: Alpha 0.1 Source code release 8: Alpha 0.1 Source code release 8
    imdb movie downloader: myImdb 0.9.3: myImdb 0.9.3
    imdb movie downloader: myImdb 0.9.4: myImdb 0.9.4
    jccc .NET smart framework: jccc .NET smart framework version 1.2010.06.07: jccc .NET smart framework version 1.2010.06.07 added Oracle database support
    LongBar: LongBar 2.1 Build 313: - Fixed library and updates to work with updated live services - Options: You can disable shadow now
    MDownloader: MDownloader-0.15.17.59623: Fixed FileFactory provider. Improved postpone policies. Added network request limiter.
    MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1: Installer for MediaCoder.NET v1.0 beta 1.1. It can now convert files with spaces in the path or filename. I have also created a filter for the SaveFil...
    MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1 Source Code: Source Code for MediaCoder.NET v1.0 beta 1.1.
    mesoBoard: mesoBoard - 0.9.1 beta: Fixed file download permissions. Released under the New BSD License.
    MPCLI: Alpha Release (0.1.0.0): This release has core functionality and is considered a potential candidate for a feature complete release of this library. However, suggestions fo...
    N2 CMS: 2.0: N2 is a lightweight CMS framework for ASP.NET. It helps professional developers build great web sites that anyone can update. Major Changes (1.5 -...
    NHTrace: NHTrace-47571: NHTrace-47571
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.125: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
    NSoup: NSoup 0.2: NSoup 0.2 corresponds to jsoup version 1.1.1. List of changes can be viewed here.
    Opalis Community Releases: Integration Pack for Data Manipulation: The Integration Pack for Data Manipulation enables you to perform a wider variety of data manipulation tasks as well as aggregate data into common ...
    Performance Analysis of Logs (PAL) Tool: PAL v2.0 Beta 1: Fixed Counter Sorting: Fixed a minor bug where duplicate counter expression paths were not being removed. Analysis Added: Added LogicalDisk Read/...
    RoTwee: RoTwee (12.0.0.0): Trial version. 17925 Make it possible to change window size
    Sharpotify - Spotify .Net Library: Sharpotify.Library 1.0: Sharpotify Library: Stable release. You can connect with Spotify, search, browse (tracks, albums, artists), get a music stream, create and edit you...
    Silverlight load on demand with MEF: mal.Web.Silverlight.MEF 1.0.0.0: mal.Web.Silverlight.MEF 1.0.0.0
    sMAPtool: sMAPtool v0.7e (without Maps): + Added: color value expansion bar for hmap (right click to select color scheme) + Added: more complex hmap editing, now uses 4 point bounding rect...
    SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: Initial release: Enjoy!
    Stripper: Stripper 0.1.1 (CLi): Removes diacritics and other unwanted characters to produce more standardized file names. Especially French characters and maybe other lang...
    Unity3D Untitled MMO: v1: version
    UrzaGatherer: UrzaGatherer 2.01a: New version with some minor bugs corrected.
    VCC: Latest build, v2.1.30608.0: Automatic drop of latest build
    VCC: Latest build, v2.1.30608.1: Automatic drop of latest build
    Watermarker.NET: 0.1.3811: A newer version with some improvements. I release this as a .zip archive, because settings are added here, so there will be .exe and .config files.
    Yet Another GPS: Alfa Release: Alfa working release

    Most Popular Projects: Dozer Enterprise Library for .NET, Employee Management System, WiiMote Physics, VisualStudio 2010 JavaScript Outlining, Spider Compiler, Concurrent Cache, Oil Slick Live Feeds, CSUFVGDC Summer Jam, WinGet, SiteMap Utility for DNN Blog Module

    Most Active Projects: Community Forums NNTP bridge, patterns & practices – Enterprise Library, Rhyduino - Arduino and Managed Code, jQuery Library for SharePoint Web Services, Rawr, NB_Store - Free DotNetNuke Ecommerce Catalog Module, Andrew's XNA Helpers, BlogEngine.NET, StyleCop, Customer Portal Accelerator for Microsoft Dynamics CRM

    Read the article

  • OTBI vs. OBIA

    - by PRajkumar
      What are the differences between OTBI and OBIA?

      OTBI -- Oracle Transactional Business Intelligence
      OBIA -- Oracle Business Intelligence Applications

      OBIA
      1. OBIA is the pre-packaged set of BI Applications that Oracle has provided for several years. It is a data warehouse based solution.
      2. It is based on a universal data warehouse design, with prebuilt adapters that connect to various source applications and bring their data into the warehouse.
      3. It consolidates data from multiple sources and brings them together.
      4. It provides a library of metrics that help measure the business.
      5. It provides a set of predefined reports and dashboards.
      6. OBIA works with multiple sources, including E-Business Suite, PeopleSoft, JDE, SAP and Fusion Applications.

      OTBI
      1. It is real-time BI.
      2. There is no warehouse or ETL process for OTBI.
      3. It is available for Fusion Applications only.
      4. OTBI leverages advanced technologies from both the BI platform and ADF to run online BI queries directly against the transactional database.
      5. OTBI does not have prebuilt dashboards and reports like OBIA.

      Note: Both OTBI and OBIA are delivered from the same metadata repository, and some repository objects are shared between them. The design allows the following configurations:

      OTBI only
      OBIA only
      OTBI and OBIA coexisting

      Both OTBI and OBIA access Fusion Applications via ADF.

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++ you have undoubtedly used the Standard Template Library (STL). Designed for in-memory management of data and collections of data, it is a core aspect of all C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL-compatible API for Berkeley DB: you can use Berkeley DB through this API as if you were using the C++ STL classes, and still make full use of Berkeley DB features. Because it is an STL-style library backed by a database, dbstl can provide some important and useful features that the in-memory C++ STL cannot. The following are a few typical use cases for the dbstl extensions to the C++ STL for data storage.

    When data exceeds available physical memory. Berkeley DB dbstl can vastly improve performance when managing a dataset which is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk. Switching to BDB's dbstl improves performance while allowing you to keep using STL containers.

    When you need concurrent access to C++ STL containers. Few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome.

    When your objects are your database. You want object persistence in your application: store objects in a database and use them across different runs of your application without having to translate them to/from SQL. dbstl is capable of storing complicated objects, even those not located in a contiguous chunk of memory, directly to disk without any unnecessary overhead.

    These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.
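    As a taste of what's to come, here is a minimal sketch of what dbstl usage can look like. It assumes a Berkeley DB build with dbstl enabled and its headers on the include path; the helper calls (dbstl_startup, dbstl_exit), the container templates (db_vector, db_map) and the ElementHolder wrapper for primitive types follow the dbstl documentation, so verify the exact names and signatures against your Berkeley DB release.

        // Minimal dbstl sketch: STL-style containers backed by Berkeley DB.
        // Assumes a BDB build with dbstl enabled; names follow the dbstl docs.
        #include <iostream>
        #include "dbstl_vector.h"
        #include "dbstl_map.h"

        int main()
        {
            dbstl::dbstl_startup();                 // initialize dbstl internals

            // Default-constructed containers are, per the dbstl docs, backed by
            // anonymous Berkeley DB databases, so no explicit Db/DbEnv setup is
            // needed for this sketch.
            dbstl::db_vector<int, dbstl::ElementHolder<int> > numbers;
            dbstl::db_map<int, int, dbstl::ElementHolder<int> > squares;

            for (int i = 0; i < 10; i++) {
                numbers.push_back(i);               // same interface as std::vector
                squares[i] = i * i;                 // same interface as std::map
            }

            // Reads go through Berkeley DB underneath, not the process heap.
            for (int i = 0; i < 10; i++)
                std::cout << i << " -> " << (int)squares[i] << std::endl;

            dbstl::dbstl_exit();                    // release dbstl resources
            return 0;
        }

    For a named, on-disk and transactional store, the dbstl documentation describes container constructors that accept Db and DbEnv handles instead of the anonymous defaults used above.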

    Read the article

  • Formatting Columns in Excel created by af:exportCollectionActionListener

    - by Duncan Mills
    The af:exportCollectionActionListener behavior in the ADF Faces Rich Client provides a very simple way of quickly dumping the contents, or the selected rows, of a table or treeTable to Excel. However, that simplicity comes at a price: it is pretty much left up to Excel how to format the data. A common use case where you have a problem is that of ID columns, which are often long numerics. You probably want to represent this data as a string; Excel, however, will probably have other ideas and render it as an exponent - not what you intended.

    In earlier releases of the framework you could sort of work around this by taking advantage of a bug which allowed you to surround the outputText in question with invisible outputText components that provided formatting hints to Excel. Something like this:

      <af:column headerText="Some wide label">
        <af:panelGroupLayout layout="horizontal">
          <af:outputText value="=TEXT(" visible="false"/>
          <af:outputText value="#{row.bigNumberValue}" rendered="true"/>
          <af:outputText value=",0)" visible="false"/>
        </af:panelGroupLayout>
      </af:column>

    However, this bug was fixed, so it can no longer be used as a trick; the export now ignores invisible columns. So, if you really need control over the formatting, there are several alternatives. First, the more powerful ADF Desktop Integration (ADFdi) package, which allows you to build fully transactional spreadsheets that "pull" the data and can update it. This gives you all the control you might need over formatting, but it does need specific Excel add-ins on the client to work. For more information about ADFdi have a look at this tutorial on OTN. Or you can of course look at BI Publisher or Apache POI if you're happy with output-only spreadsheets.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >