Search Results

Search found 34186 results on 1368 pages for 'single machine'.


  • T-SQL Improvements and Data Types in MS SQL 2008

    - by Aamir Hasan
     Microsoft SQL Server 2008 is a new version released in the first half of 2008, introducing new properties and capabilities to the SQL Server product family. All of these new and enhanced capabilities can be summed up in the classic words: secure, reliable, scalable, and manageable. SQL Server 2008 is secure. It is reliable. SQL Server 2008 is scalable and is more manageable when compared to previous releases. Now we will have a look in detail at the features that make MS SQL Server 2008 more secure, more reliable, more scalable, etc. Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information.

     Table-Valued Parameters: Inserts into structures with 1-N cardinality are problematic. One order -> N order line items, where "N" is variable and can be large; you don't want to force a new order for every 20 line items, and one database round-trip per line item slows things down. There is no ARRAY data type in SQL Server, so XML composition/decomposition has been used as an alternative. Table-valued parameters solve this problem. SQL Server has table variables (DECLARE @t TABLE (id int);); SQL Server 2008 adds strongly typed table variables (CREATE TYPE mytab AS TABLE (id int); DECLARE @t mytab;). Parameters must use strongly typed table variables.

     Table Variables are Input Only: Declare and initialize the TABLE variable (DECLARE @t mytab; INSERT @t VALUES (1), (2), (3); EXEC myproc @t;). The procedure must declare the parameter READONLY (CREATE PROCEDURE usetable (@t mytab READONLY ...) AS INSERT INTO lineitems SELECT * FROM @t; UPDATE @t SET... -- no!).

     T-SQL Syntax Enhancements: Single-statement declare and initialize (DECLARE @i int = 4;), compound assignment operators (SET @i += 1;), and row constructors (DECLARE @t TABLE (id int, name varchar(20)); INSERT INTO @t VALUES (1, 'Fred'), (2, 'Jim'), (3, 'Sue');).

     Grouping Sets: Grouping sets allow multiple GROUP BY clauses in a single SQL statement - multiple, arbitrary sets of subtotals produced in a single read pass for performance, and nested subtotals provide even better performance. Grouping sets are an ANSI standard; COMPUTE BY is deprecated.

     GROUPING SETS, ROLLUP, CUBE: SQL Server 2008 supports ANSI-syntax ROLLUP and CUBE; the pre-2008 non-ANSI syntax is deprecated. WITH ROLLUP produces n+1 different groupings of data, where n is the number of columns in the GROUP BY; WITH CUBE produces 2^n different groupings. GROUPING SETS provide a "halfway measure" - just the number of different groupings you need. Grouping sets are visible in the query plan.

     GROUPING_ID and GROUPING: Grouping sets can produce non-homogeneous sets - a grouping set includes NULL values for group members, so you need to distinguish grouping NULLs from data NULLs. GROUPING (column expression) returns 0 or 1: is this a group based on the column expression, or a NULL value? GROUPING_ID (a,b,c) is a bitmask whose bits are set based on column expressions a, b, and c.
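     To make the grouping-sets point concrete, here is a minimal illustrative sketch (the Sales table and its columns are hypothetical, not taken from the session material): it returns subtotals by region, subtotals by year, and a grand total in one pass, with GROUPING_ID labelling each row.

        -- Hypothetical table: Sales(Region varchar(30), OrderYear int, Amount money)
        SELECT Region,
               OrderYear,
               SUM(Amount)                    AS TotalAmount,
               GROUPING_ID(Region, OrderYear) AS GroupingLevel  -- 1 = subtotal by Region, 2 = subtotal by OrderYear, 3 = grand total
        FROM   Sales
        GROUP BY GROUPING SETS ((Region), (OrderYear), ());     -- three groupings, one read pass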
     MERGE Statement: Multiple set operations in a single SQL statement, using multiple sets as input (MERGE target USING source ON ...). Operations can be INSERT, UPDATE, or DELETE, applied based on WHEN MATCHED, WHEN NOT MATCHED [BY TARGET], and WHEN NOT MATCHED [BY SOURCE].

     More on MERGE: A MERGE statement can reference a $action column, used when MERGE is combined with the OUTPUT clause. Multiple WHEN clauses are possible for MATCHED and NOT MATCHED BY SOURCE; only one WHEN clause is allowed for NOT MATCHED BY TARGET. MERGE can be used with any table source. A MERGE statement causes triggers to be fired once, and rows affected includes the total rows affected by all clauses.

     MERGE Performance: The MERGE statement is transactional - no explicit transaction is required. It makes one pass through the tables, at most a full outer join: matching rows = when matched, left-outer join rows = when not matched by target, right-outer join rows = when not matched by source.

     MERGE and Determinism: UPDATE using a JOIN is non-deterministic - if more than one row in the source matches the ON clause, either/any row can be used for the UPDATE. MERGE is deterministic - if more than one row in the source matches the ON clause, it's an error.

     Keeping Track of Dependencies: New dependency views replace sp_depends, and the views are kept in sync as changes occur. sys.dm_sql_referenced_entities lists all named entities that an object references (for example: which objects does this stored procedure use?), and sys.dm_sql_referencing_entities lists the objects that reference a given entity (the reverse question).
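     To tie the MERGE points above together, here is a short illustrative sketch (the dbo.Customers target table and the column names are hypothetical, not from the original session) showing all three WHEN branches and the $action column surfaced through OUTPUT:

        -- Hypothetical target table dbo.Customers(CustomerId int, Name varchar(50)); @src stands in for any source
        DECLARE @src TABLE (CustomerId int PRIMARY KEY, Name varchar(50));
        INSERT INTO @src VALUES (1, 'Fred'), (2, 'Jim'), (4, 'Sue');

        MERGE dbo.Customers AS target
        USING @src AS source
              ON target.CustomerId = source.CustomerId
        WHEN MATCHED THEN
              UPDATE SET Name = source.Name                               -- matching rows
        WHEN NOT MATCHED BY TARGET THEN
              INSERT (CustomerId, Name) VALUES (source.CustomerId, source.Name)  -- rows only in the source
        WHEN NOT MATCHED BY SOURCE THEN
              DELETE                                                      -- rows only in the target
        OUTPUT $action, inserted.CustomerId, deleted.CustomerId;          -- $action reports INSERT/UPDATE/DELETE per row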

    Read the article

  • Build Explorer version 1.1 for Visual Studio Team Explorer is released

    - by terje
     Our free extension to Visual Studio, the folder-based Build Explorer Version 1.1, has now been released and uploaded to the Visual Studio Gallery and Codeplex. We have collected up a few changes and some bug fixes, as follows:

     Changes:
     - Queue Default Builds can now be optionally fully enabled, fully disabled, or enabled just for leaf nodes (=disabled for folders). If you have a large number of builds it was pretty scary to be able to launch all of them with just one click. However, it is nice to avoid having the dialog box up when you want to just run off a single build. That's the reasoning behind the third choice here.
     - Auto fill-in of the builds at startup and refresh. This was a request that came up a lot, and which was also irritating to us. When the Team Project is opened, the Build Explorer will start by itself and fill up its tree, so you don't need to click the node anymore. There was also quite a bit of flashing when the tree filled up; this has been reduced to just a single top-level fill before it collapses the node. The speed of the buildup of the tree has also been increased.
     - The "All Build Definitions" node is now shown on top of the list.
     - A login box appeared in certain cross-domain situations. This was a fix for the TF30063 authentication problem we had in the beginning. Hopefully the new code has that fixed properly so that both the login box and the TF30063 are gone forever. Our testing so far seems to indicate it works. If anyone gets a real problem here there are two workarounds: 1) turn off the auto refresh to reduce the issue; if this doesn't fix it, then 2) please reinstall the former version (go to the Codeplex download site if you don't have it anymore), write a comment to this blog post with a description of what happens, and I will send a temporary fix asap.

     Bug fixes:
     - The folder name matching was case sensitive, so "Application.CI" and "application.CI" created two different folders.
     - View all builds was not shown for leaf nodes, and view builds didn't work in all cases. There were some inconsistencies here which have been fixed.
     - Partly fixed: the context menu to queue a new build for disabled builds should be removed, but that was a difficult one and is still on the list; however, the command will not do anything for a disabled build.
     - Using Queue Default Builds on a folder that had some disabled builds below it made an error box appear and ruined the whole experience.

     As a result of these fixes some new options have been introduced, as shown below. The first two settings, the Separator symbol and the options for how to handle queuing of default builds, are set per Team Project and are stored in TFS source control under the BuildProcessTemplates folder, with the name Inmeta.VisualStudio.BuildExplorer.Settings.xml. The next two settings need some explanation. They handle the behavior of the auto update of the build folders. First, these are stored in the local registry per user, at the key HKEY_CURRENT_USER/Software\Inmeta\BuildExplorer. The first option, Use Timed Refresh at Startup, if turned off, means you will need to click the node as it is done in Version 1.0. The second option is a timed value: the time after the Build Explorer node is created until the scanning of the build folders starts. It is assumed that this is enough, and the tests so far indicate this. If you have very many builds and you see that the explorer doesn't get them all, try increasing this value, and of course, notify me of your case, either here or on the Visual Studio Gallery site.

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
     In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data - MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally proprietary Google technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. Procedure Map() performs filtering and sorting operations on data, whereas procedure Reduce() performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow.

     Advantages of MapReduce Procedures: The MapReduce framework usually contains distributed servers and runs various tasks in parallel to each other. There are various components which manage the communications between the various nodes of the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. During this process, if there is any disaster, the framework provides high availability and the other available nodes take over the responsibility of the failed node. As you can clearly see, the entire MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data across thousands of processing machines.

     How Does the MapReduce Framework Work? A typical MapReduce deployment contains petabytes of data and thousands of nodes. Here is the basic explanation of the MapReduce procedures which use this massive pool of commodity servers.

     Map() Procedure: There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs or sub-problems. These sub-problems are distributed to worker nodes. A worker node later processes them and does the necessary analysis. Once the worker node completes the work on its sub-problem it returns the result back to the master node.

     Reduce() Procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and aggregates them into the answer to the original big problem which was assigned to the master node. The MapReduce framework performs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run parallel to each other, and once each worker node has completed its task it can send its result back to the master node to be compiled into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data).
     The MapReduce framework has five different steps: preparing the Map() input, executing the user-provided Map() code, shuffling the Map output to the Reduce processors, executing the user-provided Reduce code, and producing the final output. Here is the dataflow of the MapReduce framework: Input Reader -> Map Function -> Partition Function -> Compare Function -> Reduce Function -> Output Writer. In a future blog post of this 21 day series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement: MapReduce is equivalent to SELECT and GROUP BY of a relational database, applied to a very large database. Tomorrow: In tomorrow’s blog post we will discuss the Buzz Word - HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
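     To make the SELECT/GROUP BY analogy concrete, here is a small illustrative T-SQL sketch (the PageViews table is hypothetical, not from the original post). Conceptually, the filtering and projection correspond to the Map() phase and the GROUP BY aggregation to the Reduce() phase - with the difference that a MapReduce framework spreads this work across thousands of nodes instead of a single database server.

        -- Hypothetical table: PageViews(Url varchar(400), ViewDate date, Bytes bigint)
        SELECT Url,                        -- the "key" a mapper would emit
               COUNT(*)   AS ViewCount,    -- aggregations a reducer would compute per key
               SUM(Bytes) AS TotalBytes
        FROM   PageViews
        WHERE  ViewDate >= '2013-10-01'    -- filtering, conceptually the Map() phase
        GROUP BY Url;                      -- grouping and summarizing, conceptually the Reduce() phase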

    Read the article

  • RegexClean Transformation

    Use the power of regular expressions to cleanse your data right there inside the Data Flow. This transformation includes a full user interface for simple configuration, as well as advanced features such as error output configuration. Two regular expressions are used, a match expression and a replace expression. The transformation is designed around named capture groups or match groups, and even supports multiple expressions. This allows rich and complex expressions to be built, all through an easy-to-reuse transformation where a bespoke Script Component was previously the only alternative. Some simple properties are available for each column selected:

    Behaviour - The two behaviour modes offer similar functionality but with a difference. Replace replaces tokens within the input, and Emit overwrites the whole string.

    Cascade - Cascade allows you to define multiple expressions, each on a new line. The match expression will be processed into one operation per line, which are then processed in order at run-time. Multiple replace expressions can also be specified, again each on a new line. If there is no corresponding replace expression for a match expression line, then the last replace expression will be used instead. It is common to have multiple match expressions, but only a single replace expression.

    Match Expression - The expression used to define the named capture groups. This is where you analyse the data, and tag or name elements within it as found by the match expression.

    Replace Expression - The replace expression determines the final output. It references the named groups from the match expression and assembles them into the final output. If you want to use regular expressions to validate data then try the Regular Expression Transformation.

    Quick Start Guide:
    - Select a column. A new output column is created for each selected column; there is no option for in-place replacement of column values. One input column can be used to populate multiple output columns, just select the column again in the lower grid, using the Input Columns drop-down selector.
    - Amend the output column name and size as required. They default to the same as the input column selected.
    - Amend the behaviour as required; the default is Replace.
    - Amend the cascade option as required; the default is true.
    - Finally, enter your match and replace regular expressions.

    Quick Sample #1: Parse an email address and extract the user and domain portions, then format the result as a web address passing the user portion as a URL parameter. This uses two match groups, user and host, which correspond to the text before the @ and after it respectively. Behaviour is Emit, and cascade is false, as we only have a single match expression.
    Match Expression: ^(?<user>[^@]+)@(?<host>.+)$
    Replace Expression: http://www.${host}?user=${user}
    Results - Sample Input: zheng0@adventure-works.com / Sample Output: http://www.adventure-works.com?user=zheng0

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the RegexClean Transformation in the list. Downloads: The RegexClean Transformation is available for both SQL Server 2005 and SQL Server 2008. Please choose the version to match your SQL Server version, or you can install both versions and use them side by side if you have both SQL Server 2005 and SQL Server 2008 installed.
    RegexClean Transformation for SQL Server 2005
    RegexClean Transformation for SQL Server 2008
    Version History: SQL Server 2005 Version 1.0.0.105 - Public Release (28 Jan 2008)

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.

    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL

    Challenges
    - Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    - Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment
    - Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    - Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages
    - Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners

    Solutions
    - Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    - Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    - Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live
    - Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution
    - Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    - Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    - Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    - Demonstrated that Oracle Exadata’s built-in analytics, data mining and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    - Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allows 10 times more data to be stored than on competitor platforms
    - Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    - Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    - Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    - On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it. A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don’t like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn’t even need a GUI. The phrase “headless build server” is often used to describe a build server that doesn’t contain any heavyweight GUI tools such as Visual Studio and is a desirable state for a build server. In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tools hosted inside the Visual Studio IDE. http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/ Frankly, however, going through these steps is a royal PITA and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday in MSDN forum thread Building a VS2013 headless build server - it's sooo hard Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane from the SSDT product team: The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine. I, like many others, would rather not have to install full-blown Visual Studio and so I asked: Is there any chance you'll ever support any of these scenarios:
    1) Installation of all build/deploy pre-requisites without installing the VS shell?
    2) TFS shipping with all of the pre-requisites for doing SSDT project build/deploys
    3) 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project build/deploys
    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that? Kevin replied again: The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers.
    I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that the DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers, we've no problem with that. Again, here’s the link to the thread and it's worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead. @Jamiet

    Read the article

  • PetaPoco with parameterised stored procedure and Asp.Net MVC

    - by Jalpesh P. Vadgama
    I have been playing with micro ORMs, as they are one of the more interesting things happening in developer communities, and I already like the concept: they are tiny, easy to use, and allow performance tweaks. PetaPoco is one of them, and I have written a few blog posts about it. In this blog post I explain how we can use PetaPoco with stored procedures that have parameters. I am going to use the same Customer table which I have used in my previous posts. For those who have not read my previous posts, here are the links:
    Get started with ASP.NET MVC and PetaPoco
    PetaPoco with stored procedures
    Now our Customer table is ready, so let’s create a simple stored procedure which will fetch a single customer via CustomerId. Following is the code for that:
    CREATE PROCEDURE mysp_GetCustomer @CustomerId as INT AS SELECT * FROM [dbo].Customer where CustomerId=@CustomerId
    Now we are ready with our stored procedure, so let’s add code to the CustomerDB class to retrieve a single customer, like the following:
    using System.Collections.Generic; namespace CodeSimplified.Models { public class CustomerDB { public IEnumerable<Customer> GetCustomers() { var databaseContext = new PetaPoco.Database("MyConnectionString"); databaseContext.EnableAutoSelect = false; return databaseContext.Query<Customer>("exec mysp_GetCustomers"); } public Customer GetCustomer(int customerId) { var databaseContext = new PetaPoco.Database("MyConnectionString"); databaseContext.EnableAutoSelect = false; var customer= databaseContext.SingleOrDefault<Customer>("exec mysp_GetCustomer @customerId",new {customerId}); return customer; } } }
    Here in the above code you can see that I have created a new method called GetCustomer which takes customerId as a parameter, and then I have written the code to use the stored procedure we created to fetch the customer information. Here I have set EnableAutoSelect=false because I don’t want the Select statement created automatically; I want to use my stored procedure for that. Now our CustomerDB class is ready, so let’s create a Details ActionResult in our controller like the following:
    using System.Web.Mvc; namespace CodeSimplified.Controllers { public class HomeController : Controller { public ActionResult Index() { ViewBag.Message = "Welcome to ASP.NET MVC!"; return View(); } public ActionResult About() { return View(); } public ActionResult Customer() { var customerDb = new Models.CustomerDB(); return View(customerDb.GetCustomers()); } public ActionResult Details(int id) { var customerDb = new Models.CustomerDB(); return View(customerDb.GetCustomer(id)); } } }
    Now let’s create a view based on that Details ActionResult method. Now everything is ready, so let’s test it in the browser. First go to the customer list, then click on details for the first customer to see how the stored procedure with a parameter is used to fetch the customer details. So that’s it. It’s very easy. Hope you liked it. Stay tuned for more... Happy Programming

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much of the data in a table is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored multiple times in the memory cache or only a single time? Here is a query you can run to figure out what kind of data is stored in the cache. USE AdventureWorks GO SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc FROM sys.dm_os_buffer_descriptors AS bd INNER JOIN ( SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc FROM ( SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id ,allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3) UNION ALL SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.type = 2 ) AS s_obj LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id WHERE database_id = DB_ID() GROUP BY name, index_id, IndexName, IndexTypeDesc ORDER BY cached_pages_count DESC; GO Now let us run the query above and observe its output. We can see in the above query that there are four columns. Cached_Pages_Count lists the pages cached in memory. BaseTableName lists the original base table from which data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index. Now, let us do one more experiment here. Please note that you should not run this test on a production server as it can severely reduce the performance of the database. DBCC DROPCLEANBUFFERS This will drop all the clean buffers so we will be able to start again from there. Now run the following script and check the execution plan for it. USE AdventureWorks GO SELECT UnitPrice, ModifiedDate FROM Sales.SalesOrderDetail WHERE SalesOrderDetailID BETWEEN 1 AND 100 GO The execution plan contains the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It will give us the following output. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately. Let me know what you think of this article. I had great pleasure while writing this article because I was able to write on this subject, which I like the most. In the next article, we will see exactly what data are cached and what are not cached, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV

    Read the article

  • Call Webservices…Maybe!?

    - by MOSSLover
    So I have been doing preliminary work for my iOS talk for a while, but did not get into the meat of the project until recently.  One day I envision my talk uploading pictures from a camera on an iPhone or iPad into SharePoint and telling people how I did it.  As you know with my Silverlight talk and any new technology, building new talks with new technologies always ends up with some pain points that you must jump over just to grab data.  So step 1 always starts out with how do we even access a webservice using the new technology. I started out watching every single SPC video available on oAuth and Rest Webservices in SharePoint 2013.  I also sent an email to Eric Shupps about some REST and 2013 examples.  The videos further confused me, because all the videos were on SharePoint hosted apps (provider and autohosted).  I did not want to create a SharePoint hosted app, but instead a mobile app outside of the SharePoint context altogether.  Nick Swan sent me his code and it was great for a starting point on how the JSON calls would look like on iOS, but I was still missing a piece.  Nick does a great job on showing how to use the REST/JSON calls in a non-MS tech, however his presentation uses the SharePoint context and can grab the SPAppToken.  At this point I had to ask the question how do you grab the SAML token outside of SharePoint 2013 in iOS using Objective-C?  After reading all the MSDN documentation, some documentation on Restkit and Objective-C/oAuth calls, and some SharePoint 2013 blog post my head was swimming.  I was dreaming about REST and iOS in SharePoint 2013.  SAML tokens were taunting me.  I was nowhere near understanding 2013. I started talking to my friend, Pedro Jimenez, who is also playing with Objective-C and went to SPC.  He found me a couple good MSDN posts with REST/JSON calls that basically showed the accessToken was all I needed (at this point I was still thinking iOS needed to be a provider hosted app which is wrong).  So then again I had to ask the SAML token question…How do you get a SAML token outside of SharePoint without the TokenHelper class? So then I started talking to people and thinking why do I need to completely avoid TokenHelper…The solution in concept is basically create a webservice in Azure wrapped into a Provider Hosted App in SharePoint.  Wictor Wilen created a helper webservice in the following blog post: http://www.wictorwilen.se/Post/How-to-do-active-authentication-to-Office-365-and-SharePoint-Online.aspx. So now I have to basically stand up the webservice, the SharePoint app wrapper, and then use Restkit to call the first webservice to grab the token and then the second webservice to pass in the token and grab some SharePoint data.  What this means is that you can no longer just pass credentials into SharePoint webservices and get data back.  You have to pass in a SAML token with every single webservice call to SharePoint.  The theory is that this token is associated with the permissions the app can handle (read, write, whatever).  It seems like a ton of pain and a lot of work, but this is step 1 in my crusade to pull some piece of data into iOS from SharePoint and show people how to do it themselves.  In the upcoming months hopefully I can get halfway to my end goal. Technorati Tags: SharePoint 2013,REST,oAuth,Objective-C,iOS

    Read the article

  • Subterranean IL: Pseudo custom attributes

    - by Simon Cooper
    Custom attributes were designed to make the .NET framework extensible; if a .NET language needs to store additional metadata on an item that isn't expressible in IL, then an attribute could be applied to the IL item to represent this metadata. For instance, the C# compiler uses DecimalConstantAttribute and DateTimeConstantAttribute to represent compile-time decimal or datetime constants, which aren't allowed in pure IL, and FixedBufferAttribute to represent fixed struct fields. How attributes are compiled Within a .NET assembly are a series of tables containing all the metadata for items within the assembly; for instance, the TypeDef table stores metadata on all the types in the assembly, and MethodDef does the same for all the methods and constructors. Custom attribute information is stored in the CustomAttribute table, which has references to the IL item the attribute is applied to, the constructor used (which implies the type of attribute applied), and a binary blob representing the arguments and name/value pairs used in the attribute application. For example, the following C# class: [Obsolete("Please use MyClass2", true)] public class MyClass { // ... } corresponds to the following IL class definition: .class public MyClass { .custom instance void [mscorlib]System.ObsoleteAttribute::.ctor(string, bool) = { string('Please use MyClass2' bool(true) } // ... } and results in the following entry in the CustomAttribute table: TypeDef(MyClass) MemberRef(ObsoleteAttribute::.ctor(string, bool)) blob -> {string('Please use MyClass2' bool(true)} However, there are some attributes that don't compile in this way. Pseudo custom attributes Just like there are some concepts in a language that can't be represented in IL, there are some concepts in IL that can't be represented in a language. This is where pseudo custom attributes come into play. The most obvious of these is SerializableAttribute. Although it looks like an attribute, it doesn't compile to a CustomAttribute table entry; it instead sets the serializable bit directly within the TypeDef entry for the type. This flag is fully expressible within IL; this C#: [Serializable] public class MySerializableClass {} compiles to this IL: .class public serializable MySerializableClass {} For those interested, a full list of pseudo custom attributes is available here. For the rest of this post, I'll be concentrating on the ones that deal with P/Invoke. P/Invoke attributes P/Invoke is built right into the CLR at quite a deep level; there are 2 metadata tables within an assembly dedicated solely to p/invoke interop, and many more that affect it. Furthermore, all the attributes used to specify p/invoke methods in C# or VB have their own keywords and syntax within IL. For example, the following C# method declaration: [DllImport("mscorsn.dll", SetLastError = true)] [return: MarshalAs(UnmanagedType.U1)] private static extern bool StrongNameSignatureVerificationEx( [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath, [MarshalAs(UnmanagedType.U1)] bool fForceVerification, [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified); compiles to the following IL definition: .method private static pinvokeimpl("mscorsn.dll" lasterr winapi) bool marshal(unsigned int8) StrongNameSignatureVerificationEx( string marshal(lpwstr) wszFilePath, bool marshal(unsigned int8) fForceVerification, bool& marshal(unsigned int8) pfWasVerified) cil managed preservesig {} As you can see, all the p/invoke and marshal properties are specified directly in IL, rather than using attributes. 
And, rather than creating entries in CustomAttribute, a whole bunch of metadata is emitted to represent this information. This single method declaration results in the following metadata being output to the assembly: A MethodDef entry containing basic information on the method Four ParamDef entries for the 3 method parameters and return type An entry in ModuleRef to mscorsn.dll An entry in ImplMap linking ModuleRef and MethodDef, along with the name of the function to import and the pinvoke options (lasterr winapi) Four FieldMarshal entries containing the marshal information for each parameter. Phew! Applying attributes Most of the time, when you apply an attribute to an element, an entry in the CustomAttribute table will be created to represent that application. However, some attributes represent concepts in IL that aren't expressible in the language you're coding in, and can instead result in a single bit change (SerializableAttribute and NonSerializedAttribute), or many extra metadata table entries (the p/invoke attributes) being emitted to the output assembly.

    Read the article

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is pretty common to have a sample database for any database product. Different companies keep on improving their products and keep on coming up with innovations in them, and to demonstrate the capability of those new enhancements they need sample databases. Microsoft has various sample databases available for free download for its SQL Server product. I have collected them here in a single blog post.

    Download an AdventureWorks Database - The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources.

    Coconut Dal - Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead, and Coconut Dal might be the answer.

    DataBooster - Extension to ADO.NET Data Provider - The dbParallel DataBooster library is a high-performance extension to the ADO.NET Data Provider, and includes two aspects: 1) a slimmed-down API encapsulation which simplifies the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class, DbAccess, to help applications with a clean DAL and avoid over-packing and redundant copying of transferred data; 2) a booster for writing mass data onto a database, based on a rational utilization of database concurrency and an effective utilization of network bandwidth.

    Tabular AMO 2012 - The sample is made of two project parts. The first part is a library of functions to manage tabular models - AMO2Tabular V2. The second part is a sample to build a tabular model - AdventureWorks Tabular AMO 2012 - using the AMO2Tabular library; the created model is similar to the AdventureWorks Tabular Model 2012.

    SQL Server Analysis Services Product Samples - SQL Server Analysis Services provides a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining.

    Analysis Services Samples for SQL Server 2008 R2 - This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.

    SQL Server Reporting Services Product Samples - This project contains Reporting Services samples released with the Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum.

    Reporting Services Samples for SQL Server 2008 R2 - This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases.

    SQL Server Integration Services Product Samples - This project contains Integration Services samples released with the Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum.

    Integration Services Samples for SQL Server 2008 R2 - This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.

    Windows Azure SQL Reporting Admin Sample - The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users.

    Windows Azure SQL Reporting ReportViewer-SOAP API usage sample - These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application.

    Enterprise Library 5.0 - Integration Pack for Windows Azure - This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database
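    Once one of the sample database backups has been downloaded, restoring it is a one-statement job. The sketch below is not from the original post - the file paths and logical file names are hypothetical and differ per download - but it shows the typical RESTORE pattern:

        -- Hypothetical paths and logical names; check the real names first with RESTORE FILELISTONLY
        RESTORE FILELISTONLY FROM DISK = N'C:\Downloads\AdventureWorks2008R2.bak';

        RESTORE DATABASE AdventureWorks2008R2
        FROM DISK = N'C:\Downloads\AdventureWorks2008R2.bak'
        WITH MOVE N'AdventureWorks2008R2_Data' TO N'C:\Data\AdventureWorks2008R2.mdf',
             MOVE N'AdventureWorks2008R2_Log'  TO N'C:\Data\AdventureWorks2008R2_log.ldf',
             RECOVERY;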

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have had to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration. The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL, or it can use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info-Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI as criteria (remember explorer zones can be used on the user interface as well), but for web services they can be used as normal filters (this means you can use up to 20 filters all up). Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of its structure. The schema pattern looks like this:
    - Zone element - maps to the ZONE_CD element, and the default value is the zone name you just created. This links the business service to the zone.
    - Filter elements - you name the filters as you like, but the mapField is set to Fx_VALUE, where x is the filter number corresponding to the filter element in the zone definition.
    - Hidden filter elements - you name the filters as you like, but the mapField is set to Hx_VALUE, where x is the filter number corresponding to the hidden filter element in the zone definition.
    - results group - this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set in the zone.
    An example in the product is the F1-USGRACML zone, which returns the access modes for a user group and application service filters; the userGroup and applicationService elements are the filters and the rows contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is that you copy the schema from one of the examples to save yourself typing it from scratch. You can simply modify the tags and other elements to suit your needs. Once the Business Service is defined it can simply be exposed as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified. This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.
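    As a sketch of the kind of SQL such a zone might wrap (this query is not from the original post - the table and column names are illustrative, and the :F1 bind-variable convention for the first filter value is an assumption about the zone parameter syntax), a filtered zone query could look something like this:

        -- Hypothetical product tables; :F1 is assumed to be the placeholder for the zone's first filter value
        SELECT prod.prod_cd,
               prod.descr,
               price.price_amt
        FROM   ci_product prod,
               ci_product_price price
        WHERE  prod.prod_cd   = price.prod_cd
        AND    prod.prod_type = :F1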

    Read the article

  • What's new in ASP.Net 4.5 and VS 2012 - part 2

    - by nikolaosk
     This is the second post in a series of posts titled "What's new in ASP.Net 4.5 and VS 2012". You can have a look at the first post in this series here. Please find all my posts regarding VS 2012 here. In this post I will be looking into the various new features available in ASP.Net 4.5 and VS 2012, specifically the enhancements in the HTML editor, CSS editor and JavaScript editor. In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine in Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.
     1) Launch VS 2012 and create a new Web Forms application by going to File -> New Web Site -> ASP.Net Web Forms Site.
     2) Choose an appropriate name for your web site.
     3) I would like to point out the new enhancements in the CSS editor in VS 2012. In the Solution Explorer, go to the Content folder and open Site.css. Then, when I try to change the background-color property of the html element, I get a brand new handy color-picker. Have a look at the picture below. Please note that the color-picker shows all the colors that have been used in this website. You can then expand the color-picker by clicking on the arrows. Opacity is also supported. Have a look at the picture below.
     4) There are also mobile styles in Site.css. These are based on media queries. Please have a look at another post of mine on CSS3 media queries. Have a look at the picture below. In this case, when the maximum width of the screen is less than 850px there will be a new layout dictated by these new rules. CSS snippets are also supported. Have a look at the picture below. I am writing a new CSS rule for an image element. I write the property transform and hit Tab, and then I have cross-browser CSS handling all of the major vendors. Then I simply add the value rotate and it is applied to all the cross-browser options. Have a look at the picture below. I am sure you realise how productive you can become with all these CSS snippets.
     5) Now let's have a look at the new HTML editor enhancements in VS 2012. You can drag and drop a GridView web server control from the Toolbox into the Site.master file. You will see a smart tag (previously only available in the Design View) that you can expand to add fields and format the web server control. Have a look at the picture below.
     6) We also have code snippets available. I type <video and then press Tab twice; by doing that I have the rest of the HTML 5 markup completed. Have a look at the picture below.
     7) There is new support for the input tag, including all the HTML 5 types and all the new accessibility features. Have a look at the picture below.
     8) Another interesting feature is the new Intellisense capability. When I change the DocType to 4.01 and then type <audio>, <video> HTML 5 tags, Intellisense does not recognise them and adds squiggly lines. Have a look at the picture below. All these features support ASP.Net Web Forms, ASP.Net MVC applications, and Web Pages.
     9) Finally I would like to show you the enhanced support that we have for JavaScript in VS 2012, with full Intellisense support and code snippet support. I create a sample JavaScript file. I type if and press Tab. I type while and press Tab. I type for and press Tab. In all three cases code snippet support kicks in and completes the construct. Have a look at the picture below. We also have full Intellisense support. Have a look at the picture below. I create a simple function and then type some XML-like comments for the input parameters. Have a look at the picture below. Then, when I call this function, Intellisense picks up the XML comments and shows the variables' data types. Have a look at the picture below. Hope it helps!!!

    Read the article

  • OWB 11gR2 – OLAP and Simba

    - by David Allan
    Oracle Warehouse Builder was the first ETL product to provide a single integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multi-dimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multi-dimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost) which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data. The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multi-dimensional model at the click of a button. This now means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of a logical model at deployment time. As a result data warehouse projects that need to provide a multi-dimensional model as part of the overall solution can be designed and implemented faster and more efficiently. Wizards for dimensions and cubes let you quickly build dimensional models and realize either relationally or as an Oracle database OLAP implementation, both 10g and 11g formats are supported based on a configuration option. The wizard provides a good first cut definition and the objects can be further refined in the editor. Both wizards let you choose the implementation, to deploy to OLAP in the database select MOLAP: multidimensional storage. You will then be asked what levels and attributes are to be defined, by default the wizard creates a level bases hierarchy, parent child hierarchies can be defined in the editor. Once the dimension or cube has been designed there are special mapping operators that make it easy to load data into the objects, below we load a constant value for the total level and the other levels from a source table.   Again when the cube is defined using the wizard we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example one of the measures is the percentage difference from a year ago which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, either by selecting existing template types such as moving averages to defining true custom expressions. The 11g OLAP option now supports percentage based summarization (the amount of data to precompute and store), this is available from the option 'cost based aggregation' in the cube's configuration. Ensure all measure-dimensions level based aggregation is switched off (on the cube-dimension panel) - previously level based aggregation was the only option. The 11g generated code now uses the new unified API as you see below, to generate the code, OWB needs a valid connection to a real schema, this was not needed before 11gR2 and is a new requirement since the OLAP API which OWB uses is not an offline one. Once all of the objects are deployed and the maps executed then we get to the fun stuff! How can we analyze the data? 
One option which is powerful and at many users' fingertips is Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data via Simba's MDX Provider for Oracle OLAP (see the Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP it's very simple to access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel, such as conditional formatting, at your fingertips. That's a very quick run-through of OWB 11gR2 with respect to Oracle 11g OLAP integration and reporting using Simba's MDX Provider for Oracle OLAP. Not a deep dive in any way, but a quick overview to illustrate the design capabilities and integrations possible.
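For readers who think in SQL rather than in the OLAP calculation templates, a rough relational equivalent of the "percent difference from a year ago" measure mentioned above is sketched below; the table and column names are illustrative and are not the objects OWB generates.

-- Hedged sketch only: sales_by_year, product_id, time_year and sales_amount
-- are hypothetical names. LAG() looks back one year per product so the
-- percentage change can be computed in plain SQL.
SELECT time_year,
       product_id,
       sales_amount,
       ROUND(100 * (sales_amount
                    - LAG(sales_amount) OVER (PARTITION BY product_id ORDER BY time_year))
                 / NULLIF(LAG(sales_amount) OVER (PARTITION BY product_id ORDER BY time_year), 0),
             2) AS pct_diff_from_year_ago
FROM sales_by_year;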

    Read the article

  • Oracle Social Network Developer Challenge: TEAM Informatics

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Here comes another Oracle Social Network Developer Challenge entry, this one courtesy of TEAM Informatics (@teaminformatics). As their name suggests, their entry was a true team effort, featuring the work of Jon Chartrand, Deepthi Sanikommu, Dmitry Shtulman, Raghavendra Joshi, and Daniel Stitely with Wayne Boerger doing the presentation honors. Speaking of the presentation, Wayne’s laptop wouldn’t project onto the plasma we had in the OTN Lounge, but luckily, Noel (@noelportugal) had his iPad and VGA dongle in his backpack of goodies, so they were able to improvise by using the iPad camera to capture Wayne’s demo and project the video to the plasma. Code will find a way. Anyway, TEAM built Do Over, an integration with Atlassian’s JIRA, coincidentally something I’ve chatted with Rich (@rmanalan) about in the past. The basic idea is simple: integrate JIRA issues with Oracle Social Network to expand and centralize the conversation around issue resolution. In Dmitry’s words: We were able to put together a team on fairly short notice and, after batting a few ideas around, decided to pursue an integration with JIRA, an issue and project tracking tool used in-house at TEAM.  After getting to know WebCenter Social, we saw immediate benefits that a JIRA integration could bring, primarily due to the fact that JIRA only allows assignment of an issue to one person at a time.  Integrating Social would allow collaboration and issue resolution to happen right from the JIRA Issue interface. TEAM tackled a very common pain point among developers, i.e. including everyone who needs to be involved in issue resolution into a single thread. If you’ve ever fixed bugs or participated in that process, you’ll know that not everyone has access to the issue resolution system, which makes consolidating discussion time-consuming and fragmented. Why? Because we typically use email as the tool for collaboration. Oracle Social Network allows for all parties involved to work in a single, private and secure conversation, and through its RESTful Public API, information from external systems like JIRA can be brought in for context. TEAM only had time to address half the solution, but given more time, I’m sure they would have made the integration bidirectional, allowing for relevant commentary to be pushed back to JIRA, closing the loop. Here are some screenshots of their integration. When Oracle Social Network is released, TEAM will have something they use internally to work on issues, and maybe they’ll even productize their work and add it to the Atlassian Marketplace so that other JIRA users can benefit from the combination of Oracle Social Network and JIRA. Thanks to everyone at TEAM for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week. Be sure to check out a full recap from Dmitry over on the TEAM blog.

    Read the article

  • OBIEE 11.1.1 - User Interface (UI) Performance Is Slow With Internet Explorer 8

    - by Ahmed A
    The OBIEE 11g UI performance is slow in IE 8 and faster in Firefox. For VPN or WAN users, it takes a long time to display links on Dashboards via IE 8. The cause is that IE 8 generates many HTTP 304 return calls, and this makes the 11g UI slower when compared to the Mozilla Firefox browser. To resolve this issue, you can implement HTTP compression and caching. This is a best practice. Why use Web Server Compression / Caching for OBIEE? Bandwidth Savings: Enabling HTTP compression can have a dramatic improvement on the latency of responses. By compressing static files and dynamic application responses, it will significantly reduce the remote (high latency) user response time. Improves request/response latency: Caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the Web to re-validate cached items can make a huge difference in browser page load times. This screen shot depicts the flow and where the compression and decompression occurs: Solution: a. How to Enable HTTP Caching / Compression in Oracle HTTP Server (OHS) 11.1.1.x
    1. To implement HTTP compression / caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).
    2. On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing. This file is located in the OHS installation directory. For example: ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1
    3. In the httpd.conf file, verify that the following directives are included and not commented out:
    LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"
    LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"
    4. Add the following lines in the httpd.conf file below the LoadModule section and restart the OHS. Note: for the Windows platform, you will need to enclose any paths in double quotes ("), for example: Alias "/analytics ORACLE_HOME/bifoundation/web/app" and <Directory "ORACLE_HOME/bifoundation/web/app">.
    Alias /analytics ORACLE_HOME/bifoundation/web/app
    # Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
    <Directory ORACLE_HOME/bifoundation/web/app>
    # We don't generate proper cross-server ETags so disable them
    FileETag none
    SetOutputFilter DEFLATE
    # Don't compress images
    SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    <FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
    # Enable future expiry of static files
    ExpiresActive on
    ExpiresDefault "access plus 1 week"
    # 1 week; this stops the HTTP 304 calls generated by IE 8
    Header set Cache-Control "max-age=604800"
    </FilesMatch>
    DirectoryIndex default.jsp
    </Directory>
    # Restrict access to WEB-INF
    <Location /analytics/WEB-INF>
    Order Allow,Deny
    Deny from all
    </Location>
    Note: Make sure you replace the placeholder "ORACLE_HOME" above with your correct BI ORACLE_HOME path. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app. Important Notes: The above caching rules are restricted to static files found inside the /analytics directory (/web/app). This approach is safer than setting static file caching globally. In some customer environments you may not get 100% performance gains in the IE 8.0 browser, in which case you need to extend the caching rules to other directories with static file content.
If OHS is installed on a separate dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance. The following screen shot summarizes the before and after results and the improvements after enabling compression and caching:

    Read the article

  • Big Data – Beginning Big Data – Day 1 of 21

    - by Pinal Dave
    What is Big Data? I want to learn Big Data. I have no clue where and how to start learning about it. Does Big Data really mean the data is big? What are the tools and software I need to know to learn Big Data? I often receive the questions mentioned above. They are good questions, and honestly, when we search online, it is hard to find authoritative and authentic answers. I have been working with Big Data and NoSQL for a while and I have decided that I will attempt to discuss this subject here on the blog. In the next 21 days we will understand what is so big about Big Data. Big Data – Big Thing! Big Data is becoming one of the most talked about technology trends nowadays. The real challenge for big organizations is to get the maximum out of the data already available and to predict what kind of data to collect in the future. How to take the existing data and make it meaningful, so that it provides accurate insight into past data, is one of the key discussion points in many executive meetings in organizations. With the explosion of data the challenge has gone to the next level, and now Big Data is becoming a reality in many organizations. Big Data – A Rubik’s Cube I like to compare big data with the Rubik’s cube. I believe they have many similarities. Just like a Rubik’s cube, it has many different solutions. Let us visualize a Rubik’s cube solving challenge where there are many experts participating. If you take five Rubik’s cubes, mix them up the same way, and give them to five different experts to solve, it is quite possible that all five people will solve the Rubik’s cube in a matter of seconds; but if you pay close attention, you will notice that even though the final outcome is the same, the route taken to solve the Rubik’s cube is not the same. Every expert will start at a different place and will try to resolve it with different methods. Some will solve one color first and others will solve another color first. Even though they follow the same kind of algorithm to solve the puzzle, they will start and end at different places and their moves will be different on many occasions. It is nearly impossible for two experts to take the exact same route. Big Market and Multiple Solutions Big Data is exactly like a Rubik’s cube – even though the goal of every organization and expert is the same, to get the maximum out of the data, the route and the starting point are different for each organization and expert. As organizations evaluate and architect big data solutions they are also learning the ways and opportunities which are related to Big Data. There is not a single solution to big data, just as there is not a single vendor which can claim to know all about Big Data. Honestly, Big Data is too big a concept and there are many players – different architectures, different vendors and different technologies. What is Next? In this 21-day series we will be exploring many essential topics related to big data. I do not claim that you will be a master of the subject after 21 days, but I do claim that I will be covering the following topics in easy-to-understand language: Architecture of Big Data; Big Data Management and Implementation; Different Technologies – Hadoop, MapReduce; Real World Conversations; Best Practices. Tomorrow In tomorrow’s blog post we will try to answer one of the very essential questions – What is Big Data? 
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
    Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express Tools.  In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements.  This includes improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 Tooling support in the box, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE Tooling support for web projects, HTML5 Intellisense for ASP.NET, and more.  TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack.  Brian Harry has a good blog post about the TFS updates here. VS 2010 SP1 Download Click here to download and install SP1 for all versions of Visual Studio (including express).  This installer examines what you have installed on your machine, and only downloads the servicing downloads necessary to update them to SP1.  The time it takes to download and update will consequently depend on what all you have installed.  Jon Galloway has a good blog post on tips to speed up the SP1 install by uninstalling unused components. Web Platform Installer Bundles In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components: VS 2010 SP1 WebPI Bundle Visual Web Developer 2010 SP1 WebPI Bundle The above WebPI bundles automate installing: VS 2010/VWD 2010 SP1 ASP.NET MVC 3 (runtime + tools support) IIS 7.5 Express SQL Server Compact Edition 4.0 (runtime + tools support) Web Deployment 2.0 Only the components that are not already installed on your machine will be downloaded when you use the above WebPI bundles.  This means that you can run the WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) and not have to worry about wasting time downloading/installing these components again. Earlier this year I did two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1.  Read the below posts to learn more about how to use them after you run the above bundles: Visual Studio 2010 SP1 and IIS Express Visual Studio 2010 SP1 and SQL CE for ASP.NET The above feature additions work with any web project type – including both ASP.NET Web Forms and ASP.NET MVC. Additional SP1 Notes Two additional notes about VS 2010 SP1: 1) One change we made between RTM and SP1 is that by default Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP.  We made this change because we’ve seen reports of (often inconsistent) performance issues caused by older video drivers.  Running in software mode eliminates these and delivers consistent speeds.  You can optionally re-enable hardware acceleration with SP1 using Visual Studio’s Tools->Options menu command – we did not remove support for HW acceleration on XP, we simply changed the default setting for it.  Jason Zander has written more details on the change and how to re-enable HW acceleration inside VS here. 2) We have discovered an issue where installing SP1 can cause TSQL intellisense within SQL Server Management Studio 2008 R2 to stop working (typing still works – but intellisense doesn’t show up).  The SQL team is investigating this now and I’ll post an update on how to fix this once more details are known.  Hope this helps, Scott P.S. 
I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Silverlight Cream for March 23, 2010 -- #818

    - by Dave Campbell
    In this Issue: Max Paulousky, Jeremy Likness, Mark Tucker, Christian Schormann, Page Brooks, Brad Abrams(-2-), Jeff Wilcox, Unnir, Bea Stollnitz, John Papa and Adam Kinney, and Bill Reiss(-2-). Shoutouts: Ashish Shetty posted his material from his MIX10 presentation: Stepping outside the browser with Silverlight 4 Not Silverlight, but dang useful, Karl Shifflett posted a Visual Studio 2010 XAML Editor IntelliSense Presenter Extension Yavor Georgiev posted his MIX10 material: Two samples from today's MIX talk From SilverlightCream.com: GroupBox Sketching Control for WPF applications Using Blend Max Paulousky creates a GroupBox control for SketchFlow for WPF. He includes a link to an example of doing the same for Silverlight. Sequential Asynchronous Workflows in Silverlight using Coroutines Jeremy Likness' latest post began with a post on the Silverlight.net forum and Rob Eisenberg's MVVM presentation from MIX10, resulting in the use of Wintellect's PowerThreading library (downloadable) and Coroutines. Windows Phone 7 UI Templates Mark Tucker has been putting a lot of thought into WP7 apps and produced 5 templates for building apps, downloadable in PowerPoint format. He's also looking to discuss this concept. Blend 4: About Path Layout, Part I Christian Schormann has a great tutorial up about Expression Blend 4 and path layout ... this is lots of great info, and it's only part 1! Custom Splash Screen for Windows Phone Page Brooks makes very quick work of showing how to add a splash screen to your WP7 app... very nice, Page! Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework Brad Abrams' next post in the series is on pulling your data from wherever it lives, and uses a DomainService to shape it for your Silverlight app. Silverlight 4 + RIA Services - Ready for Business: Consuming Data in the Silverlight Client Brad Abrams then discusses consuming that data in a Silverlight app. Not much code involvement at all.. great ROI :) Building Silverlight 3 and Silverlight 4 applications on a .NET 3.5 build machine Jeff Wilcox talks about building Silverlight 3 and Silverlight 4, both on a .NET 3.5 machine. He then adds in the Toolkit, and even WCF RIA Services. Expression Blend 4 - XAML generation tweaks Unnir demonstrates a few changes to Expression Blend 4 that produce more compact XAML. He's also asking for other examples you'd like to see tightened up. How can I sort a hierarchy? Bea Stollnitz posts plausible solutions to sorting data items at each level of a hierarchical UI, with descriptions of why they don't work, followed by the real deal... Silverlight and WPF. Silverlight Training Course (Silverlight 4) John Papa and Adam Kinney have posted a huge body of work to get us up-to-speed on Silverlight 4 -- a WhitePaper, hands-on labs, and an 8-unit course with 25 accompanying videos... geez... Silverlight game development on Windows Phone 7 Bill Reiss has a post up discussing game development on WP7 in general and then discusses his SilverSprite library, with a link to it. XNA or Silverlight for Windows Phone 7 game development? Bill Reiss next discusses the advantage of using Silverlight or XNA for your WP7 game development, and who better to discuss both? Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SQL SERVER – Solution of Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions and honestly I am stunned and amazed by the qualified entries. I will not be able to cover every single solution which was posted as a comment; however, I would like to cover a few interesting entries. I am selecting 5 solutions which are different (not necessarily the most optimal or the best – just different and interesting). Just for clarity I am including the original problem statement here.
    USE tempdb
    GO
    CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
    GO
    INSERT INTO SimpleTable (ID, Gender)
    SELECT 1, 'female'
    UNION ALL
    SELECT 2, 'male'
    UNION ALL
    SELECT 3, 'male'
    GO
    SELECT * FROM SimpleTable
    GO
    -- Insert Your Solutions here
    -- Swap value of Column Gender
    SELECT * FROM SimpleTable
    GO
    DROP TABLE SimpleTable
    GO
    Here are the five most interesting and different solutions I have received.
    Solution by Roji P Thomas
    UPDATE S
    SET S.Gender = D.Gender
    FROM SimpleTable S
    INNER JOIN SimpleTable D ON S.Gender != D.Gender
    I really loved this solution as it is very simple and drives the point home – elegant, and it will work for pretty much any pair of values (not necessarily restricted to the 'male' or 'female' options in the original question).
    Solution by Aneel
    CREATE TABLE #temp(id INT, datacolumn CHAR(4))
    INSERT INTO #temp VALUES(1,'gent'),(2,'lady'),(3,'lady')
    DECLARE @value1 CHAR(4), @value2 CHAR(4)
    SET @value1 = 'lady'
    SET @value2 = 'gent'
    UPDATE #temp SET datacolumn = REPLACE(@value1 + @value2, datacolumn, '')
    Aneel has a very interesting solution where he concatenates both values and replaces the original value. I personally liked the creativity of this solution.
    Solution by SIJIN KUMAR V P
    UPDATE SimpleTable
    SET Gender = RIGHT(('fe'+Gender), DIFFERENCE((Gender),SOUNDEX(Gender))*2)
    Sijin has amazed me with the DIFFERENCE and SOUNDEX functions. I had never visualized that these two functions could resolve the problem. Hats off to you, Sijin.
    Solution by Nikhildas
    UPDATE St
    SET St.Gender = t.Gender
    FROM SimpleTable St
    CROSS APPLY (SELECT DISTINCT Gender FROM SimpleTable WHERE St.Gender != Gender) t
    I was expecting that someone would come up with a solution that uses CROSS APPLY. This is indeed very neat and for sure an interesting exercise. If you do not know how CROSS APPLY works, this is the time to learn.
    Solution by mistermagooo
    UPDATE SimpleTable
    SET Gender = X.NewGender
    FROM (VALUES('male','female'),('female','male')) AS X(OldGender,NewGender)
    WHERE SimpleTable.Gender = X.OldGender
    As per the author this is a slow solution, but I love how the syntax is laid out and used here. I will say this is the most beautifully written solution (not necessarily the best).
    Bonus: Solution by Madhivanan
    Somehow I was confident that Madhi – SQL Server MVP – would come up with something which I would be compelled to read. He has written a complete blog post on this subject and I encourage all of you to go ahead and read it.
    Now personally I wanted to list every single comment here. Some are so good that I am just amazed by the creativity. I will write a follow-up to this blog post in the future. However, here is the challenge for you. Challenge: Go over the 50+ various solutions listed for this simple problem here. Here are my two asks for you.
1) Pick your best solution and list it here in the comments. This exercise will for sure teach us one or two things. 2) Write your own solution that is not already covered by the 50 listed solutions (one more sketch follows below as a possible starting point). I am confident that there is no end to creativity. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
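For anyone who wants a nudge before diving into the comment thread, here is one more CASE-free sketch; it is only an illustration (and may well overlap with one of the 50+ community entries), and unlike Roji's join-based version it is tied to the two known values.

-- NULLIF/ISNULL flips between the two known values without an explicit CASE:
-- NULLIF('male', Gender) is NULL when Gender = 'male', so ISNULL falls back
-- to 'female'; otherwise the expression returns 'male'.
UPDATE SimpleTable
SET Gender = ISNULL(NULLIF('male', Gender), 'female');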

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build-in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats, from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL Query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll spot quickly trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
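    As an illustration of the kind of custom-metric script described above, the sketch below counts approved comments; the table and column names are hypothetical (the real Community Server schema will differ), but SQL Monitor only needs a query like this that returns a single integer it can poll and plot over time.

    -- Hypothetical schema: dbo.ArticleComments(IsApproved BIT, ...).
    -- The monitoring tool runs the query on a schedule and charts the returned value.
    SELECT COUNT(*) AS TotalApprovedComments
    FROM dbo.ArticleComments
    WHERE IsApproved = 1;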

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you’ll frequently come across phrases like “Ubuntu is based on Debian” but what exactly does that mean? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I’ve been looking through quite a number of Linux distros recently to get an idea of what’s around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]“. For example: Fedora is based on Red Hat Ubuntu is based on Debian Linux Mint is based on Ubuntu For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux Distros, I find myself asking “Aren’t they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based off another version? The Answer SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel — a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI) available to the “user-space” applications. Debian, RedHat and others are operating systems — complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot etc). Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example) differ in approaches to do this and also in their stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact you’re technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don’t need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, and archive servers, mailing list software etc etc etc) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures — go figure how much work is put into supporting this stuff. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation etc. Note that there are variations to this “based on” thing. 
For instance, Debian fosters the creation of “pure blends” of itself: distributions which use Debian rather directly, and just add a bunch of packages and other stuff only useful for rather small groups of users such as those working in education, medicine or the music industry. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering announcing the general availability of 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering Simplified and High-Performance Solutions for Cloud, Big Data Analytics, and Real-Time Replication. The new release delivers extreme performance, increase IT productivity, and simplify deployment, while helping IT organizations to keep pace with new data-oriented technology trends including cloud computing, big data analytics, real-time business intelligence. With the 12c release Oracle becomes the new leader in the data integration and replication technologies as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions : Supporting Current and Emerging Initiatives Extreme Performance : Even higher performance than ever before Fast Time-to-Value : Higher IT Productivity and Simplified Solutions  With the new capabilities in Oracle Data Integrator 12c, customers can benefit from: Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger. Increased performance when executing data integration processes due to improved parallelism. Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c. Improved interoperability with Oracle Warehouse Builder which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering. Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion. Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance. Only Oracle GoldenGate provides the best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from: Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes via a new Coordinated Delivery feature for non-Oracle databases. Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6., as well as integration with Oracle Coherence. Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover integration. Enhanced security for credentials and encryption keys using Oracle Wallet. Real-time replication for databases hosted on public cloud environments supported by third-party clouds. 
Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations: Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training. Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments. Delivers real-time data for reporting, zero downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce . Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine. Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, Rittman Mead will join us in launching the new release. Resource Kits Meet Oracle Data Integration 12c  Discover what's new with Oracle Goldengate 12c  Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • Unreal Tournament 3 vs UDK: What Should I Choose?

    - by Matt Christian
    Many people in the mod community were very excited to see the release of the Unreal Developer Kit (UDK) a few months ago. Along with generating excitement within a very dedicated community, it also introduced many new modders to a flourishing area of indie development. However, since UDK is free, most beginners jump right into UDK, which is OK, though you might just benefit more from purchasing a shelf-copy of Unreal Tournament 3. UDK UDK is a free full version of UnrealEd (the editor environment used to create games like Gears of War 1/2, Bioshock 1/2, and of course Unreal Tournament 3). The editor gives you all the features of the editor from the shelf-copy of the game plus some refinements in many of the tools. (One of the first things you'll find about UnrealEd is that it's a collection of tools grouped into the same editor, so it really isn't a single 'tool'.) Interestingly enough, Epic is allowing you to sell any game made in UDK, with a few catches. First off, you must purchase a license for your game (which, I THINK, is approximately $99 starting). Secondly, you must pay 25% of all profits for the first $5,000 of your game revenue to them (about $1250). Finally, you cannot use any of the 'media' provided in UDK for your game. UDK provides sample meshes, textures, materials, sounds, and other sample pieces of media pulled (mostly) from Unreal Tournament 3. The final point here will really determine whether you should use UDK. There is a very small amount of media provided in UDK for someone to go in and begin creating levels without first developing your own meshes, textures, and other media. Sure, you can slap together a few unique levels, though you will end up finding yourself restricted to the same items over and over and over. This is absolutely how professional game development is; you are 'given' (typically licensed or built in-house) an engine/editor and you begin creating all the content for the game and placing it. UDK is aimed toward those who really want to build their game content from scratch with a currently existing engine. It is not suited for someone who would like to simply build levels and quick mods without learning external 3D programs and image editing software. Unreal Tournament 3 Unless you have a serious grudge against FPS's, Epic, or your computer sucks, there really is no reason not to own this game for PC. You can pick it up on Steam or Amazon for around $20 brand new. Not only are you provided with a full single-player and multiplayer game, but you are given the entire UnrealEd 3.0 including all of the content used to build UT3. If you want to start building levels and mods quickly for UT3, you should absolutely pick up a shelf-copy. However, as off-the-shelf UT3 is a few years old now, the tools have not been updated for quite a while. Compared to UDK, the menus are more difficult to navigate through and take more time getting used to. Since UDK is updated almost every month, there are new inclusions to the editor that may not be in UT3 (including the future addition of 3D!). I haven't worked enough with shelf UT3 to see if there are more features in UDK or if they both feature the same stuff in different forms; however, you should remember that the Unreal Engine 3.0 has undergone numerous upgrades between its launch and Gears of War 2 (in fact, Epic had a conference to show off what changed just between the Gears of War games). 
Since UT3 has much more core content, someone who wants to focus on level editing or modding the core UT3 game may find their needs better suited with an off-the-shelf copy of UT3.  If that level designer has a team that is generating custom assets, they may be better off with UDK. The choice is now yours...

    Read the article

  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks: Load an OS onto an SD card Get the Pi connected to the network Load a JDK I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly-older version of the Pi and not being able to find anything specific online to help me get past it, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly. There is a great resource page here that covers several ways to get the OS on an SD card, but here is what I did (on a Mac): Plug SD card into reader on/in Mac Format it (FAT32) Unmount it (diskutil unmountDisk diskn, where n is the disk number representing the SD card) Transfer the disk image for Debian to the SD card (dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m) Eject the card from the Mac (diskutil eject diskn) There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as it kept killing the ability of the Pi to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi. Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one! Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck there was. :-) Here is the sequence of steps I used to make the magic happen: Plug your newly-minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity) Boot your Pi. On the first boot, do the following things: Opt to have it use all space on the SD card (will require a reboot eventually) Disable overscan Set your timezone Enable the ssh server Update raspi-config Reboot your Pi. This will reconfigure the SD to use all space (see above). After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, just type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved.  ;-) With the OS upgrade finished, now you can follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic. 
One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration. To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time! All the best, Mark 

    Read the article

< Previous Page | 442 443 444 445 446 447 448 449 450 451 452 453  | Next Page >