Search Results

Search found 10594 results on 424 pages for 'vizioz umbraco blog'.

Page 125/424 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, file share or other internal location to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from internal sources with external data, which raises the security, connection-string and discovery issues all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you – and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you and your users have multiple options for accessing that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938

    Read the article

  • Hardware Virtualization no longer required for Windows 7 XP Mode

    - by Jonathan Kehayias
    One of my frustrations in upgrading to Windows 7 last year was that Virtual PC no longer worked since I didn’t have Hardware Virtualization on my CPU.  This really drove my transition entirely to VMware Workstation on my personal laptop.  I recently reinstalled my work laptop (with permission) on Windows 7 Enterprise and figured I’d give XP Mode a look since this machine has Hardware Virtualization enabled.  I was surprised to find that Hardware Virtualization was no longer required,...(read more)

    Read the article

  • New White Papers Available

    - by mattande
    New Master Data Services white papers are now available on MSDN. For an application-agnostic overview of Master Data Management, see Organizational Approaches to Master Data Management. For the steps needed to configure Master Data Services to work with a SharePoint workflow, see SharePoint Workflow Integration with Master Data Services....(read more)

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because often it was grayed out. Now that selection is gone, but no matter. It wouldn’t help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to use the IN operator as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution. Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let’s say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

        Expression      Data Type   Operator   Value
        [Category]      Text        =          Bikes
        [SalesAmount]               >          1000

    then AND logic is used, which means that both conditions must be true. That’s not the result I want. Instead, I need to set up the filter like this:

        Expression:   =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000
        Data Type:    Boolean
        Operator:     =
        Value:        =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if its total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can’t see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)
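
    The same OR logic could instead be pushed into the dataset query rather than a report filter. A rough T-SQL sketch (assuming the usual AdventureWorksDW2008R2 join path from FactInternetSales through DimProduct and DimProductSubcategory to DimProductCategory; this is not taken from the original post):

        -- Sketch: keep Bikes regardless of sales, and any other product only
        -- when its sales for the year exceed $1,000.
        SELECT   c.EnglishProductCategoryName AS Category,
                 p.EnglishProductName         AS Product,
                 d.CalendarYear               AS [Year],
                 SUM(f.SalesAmount)           AS SalesAmount
        FROM     dbo.FactInternetSales AS f
        JOIN     dbo.DimProduct AS p            ON p.ProductKey = f.ProductKey
        JOIN     dbo.DimProductSubcategory AS s ON s.ProductSubcategoryKey = p.ProductSubcategoryKey
        JOIN     dbo.DimProductCategory AS c    ON c.ProductCategoryKey = s.ProductCategoryKey
        JOIN     dbo.DimDate AS d               ON d.DateKey = f.OrderDateKey
        GROUP BY c.EnglishProductCategoryName, p.EnglishProductName, d.CalendarYear
        HAVING   c.EnglishProductCategoryName = N'Bikes'
              OR SUM(f.SalesAmount) > 1000;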

    Read the article

  • SQL Saturday #156 : Providence, RI

    - by AaronBertrand
    Well, East Greenwich, RI. Another successful event, this one put on by John Miner, Brandon Leach, Steve Simon, Scott Abrants and a host of other folks. Several #SQLFamily friends in attendance as well: Grant Fritchey, Mike Walsh, Jack Corbett, Wayne Sheffield and others. I gave a session in the morning and then a session to cap off the day. Thanks to everyone who attended! The downloads are here: T-SQL : Bad Habits & Best Practices The Ins & Outs of Contained Databases...(read more)

    Read the article

  • In SQLCMD mode, should CONNECT be an implicit batch separator?

    - by Greg Low
    Hi Folks, I've been working with SQLCMD mode again today and one thing about it always bites me. If I execute a script like:

        ::CONNECT SERVER1
        SELECT @@VERSION;
        ::CONNECT SERVER2
        SELECT @@VERSION;
        ::CONNECT SERVER3
        SELECT @@VERSION;

    I'm sure I'm not the only person that would be surprised to see all three SELECT commands executed against SERVER3 and none executed against SERVER1 or SERVER2. If you think that's odd behavior, here's where to vote: https://connect.microsoft.com/SQLServer/feedback/details/611144/sqlcmd-connect-to-a-different-server-should-be-an-implicit-batch-separator#detail...(read more)
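
    A common workaround (a sketch, not from the original post) is to force each CONNECT to apply to its own batch by adding explicit GO separators, so that each SELECT runs against the server you intended:

        ::CONNECT SERVER1
        SELECT @@VERSION;
        GO
        ::CONNECT SERVER2
        SELECT @@VERSION;
        GO
        ::CONNECT SERVER3
        SELECT @@VERSION;
        GO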

    Read the article

  • Meet @marcorus and @ferrarialberto at TechEd Europe 2012 #tee2012

    - by Marco Russo (SQLBI)
    Alberto and I are in Amsterdam this week at TechEd Europe 2012. If you are here at the conference, you can meet us at these sessions:

        Wed, Jun 27  10:15 AM - 11:30 AM – Room G106 – DBI319 - BISM: Multidimensional vs. Tabular
        Wed, Jun 27  02:15 PM - 02:30 PM – Microsoft Press Booth in the TechExpo area – PowerPivot for Excel 2010 Book Signing
        Thu, Jun 28  8:30 AM - 9:45 AM – Room E107 – Many-to-Many Relationships in BISM Tabular
        Fri, Jun 29  1:00 PM - 2:45 PM – Breakthrough Insight at Microsoft SQL Server Booth – TechExpo area – Staff and Q&A

    We’ll try to visit the Microsoft booth very often, and we’ll be in the Breakthrough Insight area of the SQL Server zone (see the picture to identify it). And don’t miss the PowerPivot for Excel 2010 book signing event:

    Read the article

  • Analyzing the errorlog

    - by TiborKaraszi
    How often do you do this? Look over each message (type) in the errorlog file and determine whether it is something you want to act on. Sure, some (but not all) of you have a monitoring solution in place, but are you 100% confident that it really will notify you of all messages that you might find interesting? That there isn't even one little message hiding in there that you would find valuable to know about? Or how about messages that you typically don't care about, but knowing that you have a high...(read more)
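
    If you do want to skim the log programmatically as well as by eye, one option is the undocumented sp_readerrorlog procedure – a minimal sketch (parameter behavior may vary by version):

        -- Current SQL Server error log (log 0, type 1), filtered to lines containing 'error'
        EXEC sp_readerrorlog 0, 1, N'error';

        -- Or dump the whole current log and review it yourself
        EXEC sp_readerrorlog 0, 1;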

    Read the article

  • Existing Instance, Shiny New Disks

    - by merrillaldrich
    Migrating an Instance of SQL Server to New Disks. I get to do something pretty entertaining this week – migrate SQL instances on a 2008 cluster from one disk array to another! Zut alors! I am so excited I can hardly contain myself, so let’s get started. (Only a DBA could love this stuff, am I right? I know.) Anyway, here’s one method of many to migrate your data. Assumption: this is a host-based migration, which just means I’m using the Windows file system to push the data from one set of SAN disks...(read more)
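
    For the database files themselves, a minimal sketch of one common host-based approach (the names and paths below are invented, and this is not necessarily the exact method from the post):

        -- 1. Re-point the catalog at the new locations (takes effect when the database next starts)
        ALTER DATABASE SalesDB
            MODIFY FILE (NAME = SalesDB_Data, FILENAME = N'E:\NewArray\Data\SalesDB.mdf');
        ALTER DATABASE SalesDB
            MODIFY FILE (NAME = SalesDB_Log, FILENAME = N'F:\NewArray\Log\SalesDB_log.ldf');

        -- 2. Take the database offline, copy the files to the new disks at the OS level,
        --    then bring it back online.
        ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
        -- (copy the .mdf/.ldf files to the new paths here)
        ALTER DATABASE SalesDB SET ONLINE;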

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while – good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs, pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
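
    As a rough illustration of what that looks like in SQL Server 2014 (a minimal sketch, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup; the table and columns are invented):

        -- A memory-optimized table with a hash index on the primary key.
        -- Rows live in memory as compiled struct/pointer structures; no locks or latches,
        -- and durability comes from a small write to the transaction log at commit.
        CREATE TABLE dbo.SessionState
        (
            SessionId   INT          NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
            UserName    NVARCHAR(50) NOT NULL,
            LastTouched DATETIME2    NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);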

    Read the article

  • SQL vs. Oracle Live Debate (AKA Smackdown!)

    - by Peter W. DeBetta
    A few years ago I was speaking at a conference in Raleigh, NC, where Ted Neward and I found a fun way to promote a Java vs. .NET debate that was planned for one evening. We stood in the middle of a crowd during one of the breaks and started “arguing” about Java vs. .NET with one another. Our voices quickly rose, and we ended it by slapping each other across the face with a glove to request a challenge. It was a great way to segue to our announcing of the actual debate planned later that evening....(read more)

    Read the article

  • Bleeding Edge 2012 – session material

    - by Hugo Kornelis
    As promised, here are the slide deck and demo code I used for my presentation at the Bleeding Edge 2012 conference in Laško, Slovenia. Okay, I promised to have them up by Tuesday or Wednesday at worst, and it is now Saturday – my apologies for the delay. Thanks again to all the attendees of my session. I hope you enjoyed it, and if you have any question then please don’t hesitate to get in touch with me. I had a great time in Slovenia, both during the event and in the after hours. Even if everything...(read more)

    Read the article

  • Backup Compression - time for an overhaul

    - by jchang
    Database backup compression is incredibly useful and valuable. It became popular with the then-Imceda (later Quest and now Dell) LiteSpeed. SQL Server 2008 added backup compression for Enterprise Edition only. The SQL Server EE native backup feature only allows a single compression algorithm, one that opts for CPU efficiency over the degree of compression achieved. In the long-ago past, this strategy was essential. But today the benefits are irrelevant while the lower compression is becoming...(read more)
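
    For reference, the native feature being discussed is just a backup option – a minimal sketch (the database name and path are placeholders):

        -- Native backup compression (Enterprise Edition in SQL Server 2008;
        -- also available in Standard Edition from 2008 R2 onwards)
        BACKUP DATABASE SalesDB
        TO DISK = N'G:\Backups\SalesDB_compressed.bak'
        WITH COMPRESSION, INIT, STATS = 10;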

    Read the article

  • Social media and special characters

    - by John Paul Cook
    I’ve previously blogged about using Unicode with T-SQL to put superscripts, subscripts, and special characters into text strings. Unicode is also useful in formatting social media such as Facebook, Twitter, and that dinosaur otherwise known as email. When you can’t set properties of text such as italicizing the subject line of an email message or adding subscripts to a Facebook post, Unicode can make it possible. There are Unicode characters that are intrinsically italicized. Others are intrinsically...(read more)
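
    The T-SQL side of that technique looks roughly like this – a small sketch using NCHAR with standard Unicode superscript/subscript code points:

        -- Build strings containing intrinsically superscript/subscript characters
        SELECT N'E = mc' + NCHAR(0x00B2)    AS Superscript,  -- U+00B2 SUPERSCRIPT TWO
               N'H' + NCHAR(0x2082) + N'O'  AS Subscript;    -- U+2082 SUBSCRIPT TWO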

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011

    It’s time for another certification, and we’ve just released the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before for other certifications, so I thought I would do the same for this one. I’ll also need to take exams 70-513 and 70-516, but I’ll post my notes on those separately. None of these are “brain dumps” or questions from the actual tests – just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I’ll update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far, red means I haven’t.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books: Introducing Microsoft .NET (Pro-Developer): link; Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link; Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is at the official site for the certification. link
    [ ] On that page you’ll find several resources, and the first you should follow is “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on – make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) – Books I’m Reading: | Links: | My Notes:
    [ ] Optimizing Data Access and Messaging (17%) – Books I’m Reading: | Links: | My Notes:
    [ ] Designing the Application Architecture (19%) – Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform: link | Links: | My Notes:
    [ ] Preparing for Application and Service Deployment (15%) – Books I’m Reading: | Links: | My Notes:
    [ ] Investigating and Analyzing Applications (16%) – Books I’m Reading: | Links: | My Notes:
    [ ] Designing Integrated Solutions (15%) – Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) | Links: | My Notes:

    Read the article

  • Excel 2013 Data Explorer and GeoFlow make 3-D maps quick and easy

    - by John Paul Cook
    Excel add-ins Data Explorer and GeoFlow work well together, mainly because they just work. Simple, fast, and powerful. I started Excel 2013, used Data Explorer to search for, examine, and then download latitude-longitude data and finally used GeoFlow to plot an interactive 3-D visualization. I didn’t use any fancy Excel commands and the entire process took less than 3 minutes. You can download the GeoFlow preview from here . It can also be used with Office 365. Start by clicking the DATA EXPLORER...(read more)

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it’s part of the book we’re writing), so that I have a complete picture of the best function to use, depending on the rounding operation I need to do. Here is the list of functions used, and then the results shown for a relevant set of values.

        FLOOR     = FLOOR( Tests[Value], 0.01 )
        TRUNC     = TRUNC( Tests[Value], 2 )
        ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 )
        MROUND    = MROUND( Tests[Value], 0.01 )
        ROUND     = ROUND( Tests[Value], 2 )...(read more)

    Read the article

  • How much to charge for Wordpress installation?

    - by Jack Duluoz
    I know this isn't properly a technical question, but I hope it's ok here. The question is simple: how much should I charge a customer for a WordPress installation & configuration? Configuration simply means I have to install a theme for him (which is not provided by me), various plugins, and maybe edit some lines of code here and there to make the whole thing work fine. MORE INFO: I don't do this for a living, I'm just doing this for this single customer. He told me he wants to customize some features of the blog, which I think will require a bit of code editing, but these will be small modifications, because I already told him that more substantial modifications will be billed separately. I don't know exactly how long this will take, but probably just 1 day for the setup and some more days to adapt the blog to the customer's requests, which will eventually come up later.

    Read the article

  • SSIS Dashboard 0.5.2 and Live Demo Website

    - by Davide Mauri
    In the last few days I’ve worked again on the SQL Server Integration Services Dashboard and I made some updates to the beta:

        - Added support for the "*" wildcard in project names. Now you can filter a specific project name using a URL like: http://<yourserver>/project/MyPro*
        - Added initial support for Package Execution History. Just click on a package name and you'll see its latest 15 executions.

    I’ve also created a live demo website for all those who want to give it a try before downloading and using it: http://ssis-dashboard.azurewebsites.net/

    Read the article

  • Printing PowerPoint slides in black and white

    - by John Paul Cook
    When I do SQL Server training, sometimes students want to print all of the PowerPoint slides and use them for note taking during class. For such purposes, the background is usually better off being suppressed. This is most efficiently done by changing Print Settings as shown below: Personally I recommend that people take notes directly in the slides instead of printing them. PowerPoint has a notes area. If you do want to print slides and notes, once again use the Print Settings to specify this:...(read more)

    Read the article

  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager in his last post, Rusi mentioned: "In previous SQL versions only the 8k allocations were limited by the ‘max server memory’ configuration option. Allocations larger than 8k weren’t constrained." In SQL Server versions before Denali, single-page allocations and multi-page allocations are handled by different components: the Single Page Allocator (which is responsible for Buffer Pool allocations and governed by 'max server memory') and the Multi-Page Allocator (MPA), which handles allocations larger than an 8K page. If there are many multi-page allocations, this can affect how much memory needs to be reserved outside 'max server memory', which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more generic articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations.

    So what kinds of query result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team I was connected to one of our test leads, Sangeetha, who had tested the plan cache and kindly provided this example of an MPA-intensive query: a workload that has stored procedures with a large number of parameters (say > 100, > 500), invoked via large ad hoc batches where each SP call has different parameters, will result in a plan being cached for this “exec proc” batch. This plan will result in MPA.

        Exec proc_name @p1, ….@p500
        Exec proc_name @p1, ….@p500
        . . .
        Exec proc_name @p1, ….@p500
        Go

    Another workload would be large ad hoc batches of the form:

        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        …
        Go

    In Denali all page allocations are handled by an "any size page allocator" and included in 'max server memory'. The buffer pool effectively becomes a client of the any size page allocator, which in turn relies on the memory manager. - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
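
    To see which memory clerks are behind multi-page allocations on a pre-Denali instance, a quick sketch (the multi_pages_kb column exists in SQL Server 2008/2008 R2; in Denali the single-page/multi-page split goes away and pages_kb is used instead):

        -- Top memory clerks by multi-page allocations (SQL Server 2008 / 2008 R2)
        SELECT TOP (10)
               [type],
               name,
               SUM(multi_pages_kb) AS multi_pages_kb
        FROM   sys.dm_os_memory_clerks
        GROUP  BY [type], name
        ORDER  BY SUM(multi_pages_kb) DESC;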

    Read the article

  • Did You Know: What do you know that isn't so?

    - by Kalen Delaney
    You know what they say… it's not what you don't know that will hurt you, it's what you know that isn't so! In other words, your misconceptions. Or, as Paul Nielsen calls them in his SQL Server Bible… MYTHconceptions. Some misconceptions come from misunderstanding complex information, some from misinterpreting your own results, and some from assuming you can generalize behavior from one particular situation. Since I teach advanced classes to students with lots of SQL Server experience, I actually see a lot...(read more)

    Read the article

  • SQL Server 2008 R2: StreamInsight changes at RTM: AdvanceTimeSettings

    - by Greg Low
    For those that have worked with the earlier versions of the simulator that Bill Chesnut and I constructed for the Metro content (the Highway Simulator), changes are also required to how AdvanceTimeSettings are specified. The AdapterAdvanceTimeSettings value is now generated by binding an AdvanceTimeGenerationSettings (that is based on your adapter configuration) with an AdvanceTimePolicy setting.

        public class TollPointInputFactory : ITypedInputAdapterFactory<TollPointInputConfig>, ITypedDeclareAdvanceTimeProperties...(read more)

    Read the article

  • Messages do not always appear in [catalog].[event_messages] in the order that they occur [SSIS]

    - by jamiet
    This is a simple heads up for anyone doing SQL Server Integration Services (SSIS) development using SSIS 2012. Be aware that messages do not always appear in [catalog].[event_messages] in the order that they occur, observe… In the following query I am looking at a subset of messages in [catalog].[event_messages] and ordering them by [event_message_id]:

        SELECT [event_message_id], [event_name], [message_time], [message_source_name]
        FROM   [catalog].[event_messages] em
        WHERE  [event_message_id] BETWEEN 290972 AND 290982
        ORDER  BY [event_message_id] ASC
        --ORDER BY [message_time] ASC

    Take a look at the two rows that I have highlighted, and note how the OnPostExecute event for “Utility GetTargetLoadDatesPerETLIfcName” appears after the OnPreExecute event for “FELC Loop over TargetLoadDates”. I happen to know that this is incorrect, because “Utility GetTargetLoadDatesPerETLIfcName” is a package that gets executed by an Execute Package Task prior to the For Each Loop “FELC Loop over TargetLoadDates”. If we order instead by [message_time] then we see something that makes more sense:

        SELECT [event_message_id], [event_name], [message_time], [message_source_name]
        FROM   [catalog].[event_messages] em
        WHERE  [event_message_id] BETWEEN 290972 AND 290982
        --ORDER BY [event_message_id] ASC
        ORDER  BY [message_time] ASC

    We can see that the OnPostExecute for “Utility GetTargetLoadDatesPerETLIfcName” did indeed occur before the OnPreExecute event for “FELC Loop over TargetLoadDates”; they just did not get assigned an [event_message_id] in chronological order. We can speculate as to why that might be (I suspect the explanation is something to do with the two executables appearing in different packages) but the reason is not the important thing here, just be aware that you should be ordering by [message_time] rather than [event_message_id] if you want to get 100% accurate insights into your executions. @Jamiet
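
    If you want a single deterministic ordering (a small variation on the query above, not from the original post), order by [message_time] and use [event_message_id] only to break ties between events logged at the same instant:

        SELECT [event_message_id], [event_name], [message_time], [message_source_name]
        FROM   [catalog].[event_messages] em
        WHERE  [event_message_id] BETWEEN 290972 AND 290982
        ORDER  BY [message_time] ASC, [event_message_id] ASC;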

    Read the article

< Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >