Search Results

Search found 85825 results on 3433 pages for 'sql data services'.

Page 107/3433

  • T-SQL Tuesday #006: LOB Data

    - by Adam Machanic
    Just a quick note for those of you who may not have seen Michael Coles's post (and a reminder for the rest of you): the topic of this month's T-SQL Tuesday is LOB data. Get your posts ready; next Tuesday we go big!...(read more)

    Read the article

  • Using Extended Events in SQL Server Denali CTP1 to Map out the TransactionLog SQL Trace Event EventSubClass Values

    - by Jonathan Kehayias
    John Samson ( Blog | Twitter ) asked on the MSDN Forums about the meaning of the numeric values returned by the EventSubClass column of the TransactionLog SQL Trace event.  John pointed out that this information is not available for this event like it is for the other events in the Books Online topic ( TransactionLog Event Class ), or in the sys.trace_subclass_values catalog view.  John wanted to know if there was a way to determine this information.  I did some looking and found...(read more)
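
    One possible way to recover such subclass descriptions (not necessarily the approach the post takes) is to query the Extended Events metadata DMVs, since mapped event columns publish their key/value pairs through sys.dm_xe_map_values. The DMV and event names below exist in SQL Server; whether the transaction_log event's mapped column lines up one-for-one with the trace EventSubClass values is an assumption to verify.

        SELECT o.name AS event_name,
               c.name AS column_name,
               mv.map_key,
               mv.map_value
        FROM sys.dm_xe_objects AS o
        JOIN sys.dm_xe_object_columns AS c
            ON c.object_name = o.name
           AND c.object_package_guid = o.package_guid
        JOIN sys.dm_xe_map_values AS mv
            ON mv.name = c.type_name
           AND mv.object_package_guid = c.type_package_guid
        WHERE o.object_type = 'event'
          AND o.name = 'transaction_log';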

    Read the article

  • SQL Server 2012 - Upgrade Whitepaper

    - by JustinL
    Just a short note to mention that Microsoft have released the Technical Reference Guide for upgrading to SQL Server 2012. The paper is available for download here: http://tinyurl.com/84xm5b4 There are some interesting details on approaches to upgrade, including features such as high availability, full-text search, service broker and other components (SSIS, SSAS, SSRS). Additionally, there's a (fairly) recent initiative to organise and present TechNet content more easily; there's some useful content (with interesting presentation) at the link below: http://technet.microsoft.com/en-us/library/hh393545.aspx Good luck planning your upgrades, Regards, Justin

    Read the article

  • T-SQL Tuesday: Aggregations in SSIS

    - by andyleonard
    Introduction Jes Borland ( Blog | @grrl_geek ) is hosting this month's T-SQL Tuesday - started by SQLBlog's own Adam Machanic ( Blog | @AdamMachanic ) - and it is about aggregation. I thought I'd show a couple of ways to do aggregation using SSIS. The Aggregate Transformation in SSIS The Aggregate transform in SSIS is fast. I built an SSIS package (AggregateScripts.dtsx) with two Data Flow Tasks (Using the Aggregate Transform and Using a Script Component). Using the Aggregate Transform looks like this:...(read more)
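
    For comparison (this is not from the post itself), the kind of grouping and summing an SSIS Aggregate transform performs can be sketched in plain T-SQL with GROUP BY; the table and column names here are made up for illustration.

        -- Hypothetical source table; Group By / Count / Sum operations of the Aggregate transform map to this
        SELECT CustomerID,
               COUNT(*)        AS OrderCount,
               SUM(OrderTotal) AS TotalSales
        FROM dbo.Orders
        GROUP BY CustomerID;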

    Read the article

  • SQL Server Cast

    - by Derek Dieter
    The SQL Server CAST function is the simpler data type conversion function compared to CONVERT. It takes only the value to convert, followed by the AS clause and the target data type. A quick example is the following: SELECT UserID_String = CAST(UserID AS varchar(50)) FROM dbo.User This example converts the integer UserID to a character value. [...]
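
    A small illustration of the difference: CAST takes just the expression and the target type, while CONVERT also accepts an optional style argument (handy for date formatting). Both statements below run on any recent SQL Server version.

        SELECT CAST(GETDATE() AS varchar(30));         -- no formatting control
        SELECT CONVERT(varchar(30), GETDATE(), 120);   -- style 120 = yyyy-mm-dd hh:mi:ss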

    Read the article

  • SQL Server Begin Try

    - by Derek Dieter
    The Try/Catch methodology of programming is a great innovation in SQL Server 2005+. The first question you should ask yourself before using Try/Catch should be “why?”. Why am I going to use Try/Catch? Personally, I have found a few uses; however, I must say I do fall into the category of not [...]
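
    For reference, the basic shape of the construct the post discusses; the table and the statement inside the TRY block are placeholders, but the error functions are the standard ones available since SQL Server 2005.

        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE dbo.[User] SET UserName = 'test' WHERE UserID = 1;  -- hypothetical statement
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;
            SELECT ERROR_NUMBER()  AS ErrorNumber,
                   ERROR_MESSAGE() AS ErrorMessage;
        END CATCH;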

    Read the article

  • T-SQL Tuesday #14: Resolutions

    - by AaronBertrand
    This month, T-SQL Tuesday is being hosted by freshly minted MVP Jen McCown ( blog | twitter ), and her topic is "Resolutions!" I already gave a rough sort of overview of my goals for 2011, but I thought I would be able to dig a little deeper with enough relevance to participate. So with that in mind, and with a goal of not setting the bar too high, here are a few of the resolutions I hope to achieve in 2011: To become better at PowerShell Not just because all the cool kids are doing it, but because...(read more)

    Read the article

  • DAX editor for SQL Server

    - by Davide Mauri
    One of the major criticisms of DAX is the lack of a decent editor and, more generally, of a dedicated IDE, like the one we have for T-SQL or MDX. Well, this is no longer true. On CodePlex a very interesting and promising Visual Studio 2010 extension was released at the beginning of November 2011: http://daxeditor.codeplex.com/ IntelliSense, syntax highlighting and all the typical features offered by Visual Studio are available also for DAX. Right now you have to download the source code and compile it, and that’s it!

    Read the article

  • SQL Server Connections Fall 2011 - Demos

    - by Adam Machanic
    Today is the last day of the annual SQL Server Connections show in Vegas, and I've just completed my third and final talk. (Now off to find a frosty beverage or two.) This year I did three sessions: SQL302: Parallelism and Performance: Are You Getting Full Return on Your CPU Investment? Over the past five years, multi-core processors have made the jump from semi-obscure to commonplace in the data center. While servers with 16, 32, or even 64 cores were once an out-of-reach choice for all except the...(read more)

    Read the article

  • SQL Saturday #146 : Nashua, NH

    - by AaronBertrand
    Today was SQL Saturday #146, put on by Mike Walsh, Jack Corbett, and a host of other volunteers and organizers. Scott and I missed the speaker dinner last night, but we headed up from Rhode Island at 6:00 AM and made a good day of it. We had lots of great conversations with both existing friends and potential customers. After lunch I participated in a panel discussion with Joey D'Antoni and Andrew Kelly, led by Mike. We basically talked about various things DBAs are responsible for - and ultimately...(read more)

    Read the article

  • Two bugs you should be aware of

    - by AaronBertrand
    In the past 24 hours I have come across two bugs that can be quite problematic in certain environments. LPIM issue with SetFileIoOverlappedRange Last night the CSS team posted a blog entry detailing a potential issue with Lock Pages in Memory and Windows' SetFileIoOverlappedRange API. I tweeted about it at the time, but thought it could use a little more treatment. The potential symptoms can vary, but include the following (as quoted from the blog post): Wide ranging in SQL from invalid write location,...(read more)
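
    Not from the post itself, but a quick way to check whether an instance is even running with Lock Pages in Memory (and so is potentially exposed to the LPIM issue described): sys.dm_os_process_memory reports locked page allocations.

        SELECT locked_page_allocations_kb,
               large_page_allocations_kb
        FROM sys.dm_os_process_memory;
        -- a non-zero locked_page_allocations_kb indicates Lock Pages in Memory is in effect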

    Read the article

  • SQL 2012 - MySemanticSearch Demo with Tag Clouds

    - by sqlartist
    Excellent demonstration of the new SQL Server 2012 Semantic Search feature available at http://mysemanticsearch.codeplex.com Just tried it out on a large Business Intelligence-related Microsoft Word collection and also the health-related DMOZ collection of HTML files discussed in my previous posts. I have included some screenshots below of each document collection. I have realised that the Tag Cloud may need to be a bit more configurable based on the results of any search term. Business Intelligence...(read more)
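
    To give a flavour of what the demo surfaces, SQL Server 2012 exposes extracted key phrases through the SEMANTICKEYPHRASETABLE function; the table name below is hypothetical, and the indexed column must have a full-text index created with the STATISTICAL_SEMANTICS option plus the semantic language statistics database installed.

        SELECT TOP (20) keyphrase, score
        FROM SEMANTICKEYPHRASETABLE(dbo.Documents, *)   -- dbo.Documents is a placeholder table
        ORDER BY score DESC;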

    Read the article

  • Speaking at SQL Saturday #39 in NYC!

    - by andyleonard
    I am honored to present Applied SSIS Design Patterns and Introduction to Incremental Loads at SQL Saturday #39 in New York City! If you're there and you read this blog, be sure to stop by and introduce yourself! :{> Andy...(read more)

    Read the article

  • SQL Server For Each Row Next

    - by Derek Dieter
    It is difficult for me to write this particular article, and I'll tell you why. If you don't care, then just skip down to the example; but here goes anyway. It is very rare that you should have to perform looping in SQL. There are certain situations that do require it, and [...]
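
    The excerpt cuts off before the author's example; as a stand-in, the usual row-by-row pattern looks roughly like the sketch below (table, column, and procedure names are invented for illustration), though a set-based rewrite is almost always preferable.

        DECLARE @UserID int;
        DECLARE user_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT UserID FROM dbo.[User];
        OPEN user_cursor;
        FETCH NEXT FROM user_cursor INTO @UserID;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC dbo.ProcessUser @UserID;   -- hypothetical per-row work
            FETCH NEXT FROM user_cursor INTO @UserID;
        END
        CLOSE user_cursor;
        DEALLOCATE user_cursor;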

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software, but make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on modification times but on certain other properties (e.g. completeness). (Or, alternatively, some paper that describes such a tool.)

    As an illustration, I'd like to automate the following process:
      - get data (e.g. a tarball) from some regularly updated source
      - copy it somewhere if it's not there (based e.g. on some filename scheme)
      - convert the files to a different format (but only if there aren't successfully converted ones there - e.g. from a previous attempt - custom comparison routine)
      - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of the file and file "freshness")
      - finally compute something (e.g. word count for something identifiable and store it in the database, but only if the DB does not have an entry for that exact ID yet)

    Observations:
      - there are different stages
      - each stage is usually simple to compute or implement in isolation
      - each stage may be simple, but the data volume may be large
      - each stage may produce a few errors
      - each stage may have different signals on when (re)processing is needed

    Requirements:
      - builds should be interruptible and idempotent (== robust)
      - when interrupted, already processed objects should be reused to speed up the next run
      - data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be ok)
      - some form of dependency graph that describes the process would be nice for later visualizations
      - it should leverage existing programs, if possible

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an ok glue for a variety of other programs written in other languages), so I wonder if worse is better?

    Read the article

  • Oracle Insurance Gets Innovative with Insurance Business Intelligence

    - by nicole.bruns(at)oracle.com
    Oracle Insurance announced yesterday the availability of Oracle Insurance Insight 7.0, an insurance-specific data warehouse and business intelligence (BI) system that transforms the traditional approach to BI by involving business users in its creation and maintenance.

    "Rapid access to business intelligence is essential to compete and thrive in today's insurance industry," said Srini Venkatasantham, vice president, Product Strategy, Oracle Insurance. "The adaptive data modeling approach of Oracle Insurance Insight 7.0, combined with the insurance-specific data model, offers global insurance companies a faster, easier way to get the intelligence they need to make better-informed business decisions."

    New features in Oracle Insurance Insight 7.0 include:
      - "Adaptive Data Modeling" via the new warehouse palette: gives business users the power to configure lines of business via an easy-to-use warehouse palette tool. Oracle Insurance Insight then automatically creates data warehouse elements - such as line-specific database structures and extract-transform-load (ETL) processes - speeding up time-to-value for BI initiatives.
      - Out-of-the-box insurance models or create-from-scratch option: includes pre-built content and interfaces for six Property and Casualty (P&C) lines. Additionally, insurers can use the warehouse palette to deploy any and all P&C or General Insurance lines of business from scratch, helping insurers support operations in any country.
      - Leverages Oracle technologies: in addition to Oracle Business Intelligence Enterprise Edition, the solution includes Oracle Database 11g as well as Oracle Data Integrator Enterprise Edition 11g, which delivers an Extract, Load and Transform (E-L-T) architecture and eliminates the need for a separate transformation server. Additionally, the expanded Oracle technology infrastructure enables support for Oracle Exadata.

    Martina Conlon, a Principal with Novarica's Insurance practice and author of Business Intelligence in Insurance: Current State, Challenges, and Expectations, says, "The need for continued investment by insurers in business intelligence capabilities is widely understood, and the industry is acting. Arming the business intelligence implementation with predefined insurance-specific content, and flexible and configurable technology, will get these projects up and running faster."

    Learn more: To see a demo of the Oracle Insurance Insight system, click here. To read the press announcement, click here.

    Read the article

  • Social Analytics in your current data

    - by Dan McGrath
    By now everyone is aware of the massive boom in social networking (Twitter, Facebook, LinkedIn), and obviously a big part of its business model revolves around being able to mine this data to create information that can be used to make money for someone. Gartner has identified 'Social Analytics' as one of the top 10 strategic technologies for 2011. Has anyone looked at their existing data structures to determine if they could extract a social graph and then perform further data mining against this? How does it fit in with your other strategic development strategies? What information are you trying to extract from the data? Take, for example, a bank. It could conceivably determine a social graph through account relationships and transactions. Obviously there would be open edges on the graph where funds enter/leave the institution, but that shouldn't detract from the usefulness of the data. I'm looking for actual examples with the answers, as well as why/how they did it. References to other sites will be greatly appreciated. Note: I'm not at all referring to mining data out of actual social networks.
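
    As a concrete (and entirely hypothetical) sketch of the bank example: given a transactions table, an edge list for the social graph can be derived with a simple aggregate, where each directed edge is weighted by transfer count and total amount. The schema below is assumed, not taken from any real system.

        -- dbo.Transactions(FromAccountID, ToAccountID, Amount) is an assumed schema
        SELECT FromAccountID AS SourceNode,
               ToAccountID   AS TargetNode,
               COUNT(*)      AS TransferCount,
               SUM(Amount)   AS TotalAmount
        FROM dbo.Transactions
        GROUP BY FromAccountID, ToAccountID;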

    Read the article

  • The Oldest Big Data Problem: Parsing Human Language

    - by dan.mcclary
    There's a new whitepaper up on Oracle Technology Network which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance.  Digital Reasoning's approach is inherently "big data friendly," as it leverages multiple components of the Hadoop ecosystem.  Moreover, the paper addresses the oldest big data problem of them all: extracting knowledge from human text.   You can find the paper here.   From the Executive Summary: There is a wealth of information to be extracted from natural language, but that extraction is challenging. The volume of human language we generate constitutes a natural Big Data problem, while its complexity and nuance requires a particular expertise to model and mine. In this paper we illustrate the impressive combination of Oracle Big Data Appliance and Digital Reasoning Synthesys software. The combination of Synthesys and Big Data Appliance makes it possible to analyze tens of millions of documents in a matter of hours. Moreover, this powerful combination achieves four times greater throughput than conducting the equivalent analysis on a much larger cloud-deployed Hadoop cluster.

    Read the article

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe. However, the reason that persistent data structures are thread-safe is that if one thread were to "add" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?

    Read the article

  • LINQ to SQL get grouped MIN with JOIN

    - by Ira Rainey
    I'm having trouble with a LINQ to SQL query getting the min value using Visual Basic. Here's the SQL:

        SELECT RC.AssetID, MIN(RC.RecCode) AS RecCode, JA.EngineerNote
        FROM JobAssetRecCode RC
        JOIN JobAssets JA
            ON JA.AssetID = RC.AssetID
           AND JA.JobID = RC.JobID
        WHERE RC.InspState = 2
          AND RC.RecCode > 0
          AND RC.JobID = @JobID
        GROUP BY RC.AssetID, JA.EngineerNote;

    I seem to be going around in circles here with grouping etc., not managing to get it working. Any help would be greatly appreciated.

    Read the article

  • SQL Server 2008 R2 CTP Installer crashes instantly after starting it

    - by Adrian Grigore
    Hi, I'm trying to upgrade my existing SQL Server 2008 SP1 installation to SQL Server 2008 R2 (November CTP). I started the setup and chose the upgrade option, and after some time the installer told me to reboot. As soon as I confirmed with OK, it crashed. After rebooting I can't even run the setup file anymore; it crashes instantly without an error message. What's the recommended way of troubleshooting this? Thanks, Adrian

    Read the article
