Search Results

Search found 10442 results on 418 pages for 'blog'.


  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study drawn from one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper, and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don't often see in other case studies (which are usually more marketing-oriented). This white paper has the following structure:

    - Requirements (data model, capacity planning, client tool)
    - Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    - Data Model optimizations (memory compression, query performance, scalability)
    - Partitioning and Processing strategy for near real-time latency
    - Hardware selection (NUMA analysis, Azure VM tests)
    - Scalability tests (estimation of maximum users per node)

    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true:

    [...] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) - do this. - Owen Graupman, inContact

    I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical "BI system"), because this is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • Performance impact: What is the optimal payload for SqlBulkCopy.WriteToServer()?

    - by Linchi Shea
    For many years, I have been using a C# program to generate TPC-C compliant data for testing. The program relies on the SqlBulkCopy class to load the data it generates into the SQL Server tables. In general, the performance of this C# data loader is satisfactory. Lately, however, I found myself in a situation where I needed to generate a much larger amount of data than I typically do, and the data needed to be loaded within a confined time frame. So I was driven to look into the code...(read more)

    Read the article

  • How can I parse Amazon S3 log files?

    - by artlung
    What are the best options for parsing Amazon S3 (Simple Storage) log files? I've turned on logging and now I have log files that look like this:

        858e709ba90996df37d6f5152650086acb6db14a67d9aaae7a0f3620fdefb88f files.example.com [08/Jul/2010:10:31:42 +0000] 68.114.21.105 65a011a29cdf8ec533ec3d1ccaae921c 13880FBC9839395C REST.GET.OBJECT example.com/blog/wp-content/uploads/2006/10/kitties_we_cant_stop_here_this_is_bat_country.jpg "GET /example.com/blog/wp-content/uploads/2006/10/kitties_we_cant_stop_here_this_is_bat_country.jpg HTTP/1.1" 200 - 32957 32957 12 10 "http://atlanta.craigslist.org/forums/?act=Q&ID=163218891" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.19) Gecko/2010031422 Firefox/3.0.19" -

    What are the best options for automating the parsing of these log files? I'm not using any Amazon services other than S3.

    Read the article

  • T-SQL Tuesday 24: Ode to Composable Code

    - by merrillaldrich
    I love the T-SQL Tuesday tradition, started by Adam Machanic and hosted this month by Brad Schulz. I am a little pressed for time this month, so today's post is a short ode to how I love saving time with Composable Code in SQL. Composability is one of the very best features of SQL, but it sometimes gets picked on due to both real and imaginary performance worries. I like to pick composable solutions when I can, while keeping the perf issues in mind, because they are just so handy and eliminate so much...(read more)
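    For anyone new to the idea, composable T-SQL usually means building queries out of reusable units such as views, CTEs, and inline table-valued functions, which the optimizer expands into the outer query. A minimal sketch of my own (the dbo.Orders table and its columns are hypothetical, not from the post):

        -- An inline table-valued function: one reusable, composable unit.
        CREATE FUNCTION dbo.OrdersByCustomer (@CustomerID int)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT OrderID, OrderDate, TotalDue
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID
        );
        GO

        -- The function composes with further filtering and aggregation;
        -- because it is inline, the optimizer folds it into this query.
        SELECT YEAR(OrderDate) AS OrderYear, SUM(TotalDue) AS YearTotal
        FROM dbo.OrdersByCustomer(42)
        GROUP BY YEAR(OrderDate);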

    Read the article

  • Cumulative Update #1 for SQL Server 2005 SP4

    - by AaronBertrand
    Well, much quicker than I would have suspected, the SQL Server Release Services team has incorporated all of the fixes in 2005 SP3's CU #12 into the first CU for SP4. Thanks to Chris Wood for the heads up. You can get the new Cumulative Update here: KB #2464079 : Cumulative update package 1 for SQL Server 2005 Service Pack 4 The nice round number of build 5000 didn't last long either; this CU will update you from 9.00.5000 to 9.00.5254....(read more)
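    To verify which build you are on before and after applying the CU, a quick standard check (my addition, not from Aaron's post):

        -- Should report 9.00.5254 once CU #1 for SP4 is applied
        SELECT SERVERPROPERTY('ProductVersion') AS Build,
               SERVERPROPERTY('ProductLevel')   AS ServicePack;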

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that currently exist on your database objects. This function takes two parameters, an object_id and a stats_id. Let's have a look at the result set from this function against the AdventureWorks2012 Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats catalog view:

        SELECT sp.*
        FROM sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')

    The first two columns returned by this function are the object_id and the stats_id columns. The next column, 'last_updated', gives you the date and the time that a particular statistic was last updated. The next column, 'rows', gives you the total number of rows in the table as of the last statistics update date. The 'rows_sampled' column gives you the number of rows that were sampled to create the statistic. The 'steps' column represents the number of specific value ranges from the statistic histogram. The 'unfiltered_rows' column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the 'unfiltered_rows' column will always equal the 'rows' column. Lastly we have the 'modification_counter' column, which represents the number of modifications to the leading column of a given statistic since the last time the statistic was updated.

    Probably the most important column from this Dynamic Management Function is the 'last_updated' column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
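    As a practical follow-on, the 'last_updated' and 'modification_counter' columns can be combined to flag statistics that may be stale. The sketch below is my own, not from the post, and the thresholds are arbitrary:

        -- Flag statistics that have not been updated in 7+ days
        -- and have seen modifications since the last update
        -- (both thresholds are arbitrary; tune for your system).
        SELECT OBJECT_NAME(s.object_id) AS table_name,
               s.name                   AS stats_name,
               sp.last_updated,
               sp.rows,
               sp.modification_counter
        FROM sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
        WHERE sp.last_updated < DATEADD(DAY, -7, SYSDATETIME())
          AND sp.modification_counter > 0
        ORDER BY sp.modification_counter DESC;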

    Read the article

  • Recorded Webcast Available: Extend SCOM to Optimize SQL Server Performance Management

    - by KKline
    Join me and Eric Brown, Quest Software senior product manager for SQL Server monitoring tools, as we discuss the server health-check capabilities of Systems Center Operations Manager (SCOM) in this previously recorded webcast. We delve into techniques to maximize your SCOM investment as well as ways to complement it with deeper monitoring and diagnostics. You’ll walk away from this educational session with the skills to: Take full advantage of SCOM’s value for day-to-day SQL Server monitoring Extend...(read more)

    Read the article

  • Linked servers and performance impact: Direction matters!

    - by Linchi Shea
    When you have some data on a SQL Server instance (say SQL01) and you want to move the data to another SQL Server instance (say SQL02) through openquery(), you can either push the data from SQL01, or pull the data from SQL02. To push the data, you can run a SQL script like the following on SQL01, which is the source server:

        -- The push script
        -- Run this on SQL01
        use testDB
        go
        insert openquery(SQL02, 'select * from testDB.dbo.target_table')
        select * from source_table;

    To pull the data, you can run...(read more)
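    The pull version is cut off above, so as a point of comparison here is a minimal sketch of what a pull script typically looks like. This is my reconstruction of the common pattern, not the author's elided script, and it assumes SQL01 is configured as a linked server on SQL02:

        -- The pull script (sketch)
        -- Run this on SQL02, the destination server
        use testDB
        go
        insert into dbo.target_table
        select *
        from openquery(SQL01, 'select * from testDB.dbo.source_table');

    As the post's title hints, the two directions can perform very differently even though they move the same rows.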

    Read the article

  • New SSIS features and enhancements in Denali – a webinar on 28th June in association with Pragmatic Works

    - by jamiet
    Tomorrow I shall be presenting a webinar entitled "New SSIS features and enhancements in Denali". The webinar is being hosted by Pragmatic Works and you can sign up for it at Pragmatic Works webinars. The webinar will start at 19:30 BST and you can check the time for your timezone at this link: http://www.timeanddate.com/worldclock/fixedtime.html?msg=New+SSIS+features+and+enhancements+in+Denali&iso=20110628T1830 The webinar was arranged a few months ago, and at that time we were hoping that the next Community Technology Preview (CTP) of SQL Server Denali would be available for public consumption; unfortunately it transpires that that is not yet the case, and hence I will be presenting the new features of CTP1, which was released at the start of this year. If you're not yet familiar with the new features of SSIS that are coming in the next release of SQL Server then please do come and join the webinar. @Jamiet

    Read the article

  • Thoughts on Nexus in SQL Server PDW

    - by jamiet
    I have been on a SQL Server Parallel Data Warehouse (aka PDW) training course this week and was interested to learn that you can't (yet) use SQL Server Management Studio (SSMS) against PDW; instead they ship a third-party tool called Nexus Chameleon. This was a bit of a disappointment at the beginning of the week (I'd prefer parity across SQL Server editions) but actually, having used Nexus for 3 days, I'm rather getting used to it. Some of it is a bit clunky (e.g. everything goes via an ODBC DSN) but once you get into using it it's the epitome of "it just works". For example, over the past few years I have come to rely on intellisense in SSMS and have learnt to cope with its nuances. There is no intellisense in Nexus but you know what... I don't really miss it that much. In a sense it's a breath of fresh air not having to hope that you've crossed the line into that will it work/won't it work grey area with SSMS intellisense. And I don't end up writing @@CONNECTIONS instead of FROM anymore (anyone else suffer from this?) :) Moreover, Nexus is a standalone tool. It's not a bunch of features shoehorned into something else (Visual Studio). Another thing I like about Nexus is that you can actually do something with your resultset client-side. Take a look at the screenshots below: You can see Nexus allows you to group a resultset by a column or set of columns. Nice touch. I know that many people have submitted Connect requests asking for the ability to do similar things in SSMS that would mean we don't have to copy resultsets into Excel (I know I have) - Nexus is a step in that direction. It's refreshing to use a tool that just gets out of the way yet still has some really useful features. How ironic that it gets shipped inside an edition of SQL Server! If I had the option of using Nexus in my day job I suspect that over time I would probably gravitate back to SSMS because as yet I haven't really stretched Nexus' capabilities, overall SSMS *does* have more features, and up until now I've never really had any objections to it... but it's been an interesting awakening into the nuances that plague SSMS. Anyone else used Nexus? Any thoughts on it? @Jamiet

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let's talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I've chosen to write about spools because they seem to get a bad rap (even in my song I used the line "There's spooling from a CTE, they've got recursion needlessly"). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful.

    If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word 'spool', you'll notice it says there are 46 matches. 46! Yeah, that's what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn't used in SQL 2012, I won't describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I'll mention in a moment.

    In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you're going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I'm sure you use spools every time you use your sewing machine. I know I do. I can't think of a time when I've got out my sewing machine to do some sewing and haven't used a spool. However, I often run SQL queries that don't use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It's like I can just sew with my cotton despite it not being on a spool!

    Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I'll describe what's going on in a few paragraphs' time.

    So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it's a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature.
    Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I'm sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. "Lazy" might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it's needed (Lazy). Window Spools are both physical and logical. They're eager on a per-window basis, but lazy between windows.

    And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source.

    If you're someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I'm updating my contact list, and some of my changes move data to later in the book. If I'm not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don't, by using a copy of the data. This problem is known as the Halloween Effect (not because it's spooky, but because it was discovered in late October one year).

    As I'm sure you can imagine, the kind of spool you'd need to protect against the Halloween Effect would be eager, because if you're only handling one row at a time, then you're not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I'm forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn't stop the index being maintained, but it does mean that the index is protected from the changes that are being done.

    There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That's the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.

    Table v Index

    When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we're creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
    That all depends on how the data is going to be used in other parts of the plan. If you've ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it's worthwhile. It's worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used than about the underlying structure.

    Recursive CTE

    I've already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there's no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE's logic. Annoyingly, this doesn't happen. Consider this query, which really looks like it's using the same data twice. I'm creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn't use a spool for the set described by the CTE, but it doesn't. On the other hand, if we don't pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it's nice to give the Query Optimizer hints about how to execute queries, it usually doesn't do a bad job, even without spooling (and you can always use a temporary table).

    When recursion is used, though, spooling should be expected. Consider what we're asking for in a recursive CTE. We're telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It's very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn't quite fit, but that's what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012.) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there's only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we've reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.)

    Spools are useful. Don't lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you're essentially doing the same – don't get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query.

    I hope you're enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators?
    At some point I'll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley
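    To see the spools Rob describes for yourself, you can run a small recursive CTE with the actual execution plan enabled; the plan should show the lazy Index Spool being fed and the Table Spool reading rows back off it. This is my own sketch, assuming an Employee table with EmployeeID and ManagerID columns as in the older AdventureWorks:

        -- Walk the management hierarchy; each iteration of the
        -- recursive member pulls rows back off the spool.
        WITH OrgChart AS
        (
            -- Anchor: the top of the hierarchy
            SELECT EmployeeID, ManagerID, 0 AS Depth
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL

            UNION ALL

            -- Recursive member: joins back to the spooled set
            SELECT e.EmployeeID, e.ManagerID, oc.Depth + 1
            FROM HumanResources.Employee AS e
            JOIN OrgChart AS oc ON e.ManagerID = oc.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Depth
        FROM OrgChart
        OPTION (MAXRECURSION 100);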

    Read the article

  • ALT.NET Seattle

    - by GeekAgilistMercenary
    Time to rock the ALT.NET scene and head up to the conference this weekend. I must say, out of all the conferences I have been to, the ALT.NET Conference is by far one of the best. Great minds, great attitudes, awesome chances to learn, awesome chances to expand on one's ideas with others who have hit the same hurdles! All in all, last year was great and I am expecting it to be a great conference this year also. For more information check out the ALT.NET site: http://2010conf.altnetseattle.org/ To get more involved in the monthly ALT.NET events in Seattle: http://groups.google.com/group/altnetseattle http://www.facebook.com/group.php?gid=111345965570 http://www.altnetseattle.org/ If you are in the Seattle area this weekend, be sure to hit up the conference. For the original entry and other blog entries, check out my personal blog.

    Read the article

  • Logical and Physical Modeling for Analytical Applications

    - by Dejan Sarka
    I am proud to announce that my first course for Pluralsight has been released. The course title is Logical and Physical Modeling for Analytical Applications. Here is the description of the course. A bad data model leads to an application that does not perform well. Therefore, when developing an application, you should create a good data model from the start. However, even the best logical model can't help when the physical implementation is bad. It is also important to know how SQL Server stores and accesses data, and how to optimize the data access. Database optimization starts by splitting transactional and analytical applications. In this course, you learn how to support analytical applications with logical design, gain an understanding of the problems with data access for queries that deal with large amounts of data, and learn about SQL Server optimizations that help solve these problems. Enjoy the course!

    Read the article

  • PASS Call for Speakers

    - by Paul Nielsen
    It's that time again - the PASS Summit 2010 (Seattle, Nov 8-11) Call for Speakers is now open and accepting abstracts until June 5th. Personally, I'm on a pattern: in odd years I present what I'm excited about, and in even years I try to present what I expect others are jazzed about, which takes a bit more work. Last year I offered to coach any PASS speakers for free, with some success. I'm offering that service again, starting with your abstracts. If you'd like me to review your abstracts...(read more)

    Read the article

  • Can you over-normalize?

    - by drsql
    Now, don't get too excited and grab your pitchforks and torches. Clearly, it is extremely possible to overdo something in the design, but very often normalization takes the rap as the culprit. In my "Database Design Fundamentals" presentation, one of my favorite things to do is ask "What is the most important normal form?" 9 out of 10 times, someone answers "Third". When I ask what they have against fourth, they usually say that it makes the database work too slow. But when they find out that...(read more)
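    For readers wondering what fourth normal form actually guards against, here is a classic multi-valued-dependency illustration (my own example, not from the post):

        -- Violates 4NF: skills and classes are independent facts
        -- about an instructor, but this design forces every
        -- skill/class combination to be stored.
        CREATE TABLE dbo.InstructorSkillClass
        (
            InstructorID int         NOT NULL,
            Skill        varchar(50) NOT NULL,
            Class        varchar(50) NOT NULL,
            PRIMARY KEY (InstructorID, Skill, Class)
        );

        -- The 4NF fix: split the independent facts apart.
        CREATE TABLE dbo.InstructorSkill
        (
            InstructorID int         NOT NULL,
            Skill        varchar(50) NOT NULL,
            PRIMARY KEY (InstructorID, Skill)
        );

        CREATE TABLE dbo.InstructorClass
        (
            InstructorID int         NOT NULL,
            Class        varchar(50) NOT NULL,
            PRIMARY KEY (InstructorID, Class)
        );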

    Read the article

  • Awesome Bookmarklet Collection Ready to Select from and Add to Your Browser

    - by Asian Angel
    Bookmarklets are extremely useful additions to have for your everyday browsing needs without the hassle (or slowdown effect) of extensions. With that in mind, tech blog Guiding Tech has put together a terrific collection of 21 bookmarklets that are ready to add to your favorite browser. Just scroll down and select/install the bookmarklets you like from the blog post and enjoy the enhanced browsing! You can see the beginning and end results from our sample use of the Search Site Bookmarklet in the screenshots above and below… Note: We altered the bookmarklet slightly to focus the search results through Google Singapore.

    Read the article

  • Five Things To Which SQL Server Should Say "Goodbye and Good Riddance"

    - by Adam Machanic
    I was tagged by master blogger Aaron Bertrand and asked to identify five things that should be removed from SQL Server. Easy enough, or so I thought... 1) Tempdb. But I should qualify that a bit. Tempdb is absolutely necessary for SQL Server to properly function, but in its current state is easily the number one bottleneck in the majority of SQL Server instances. Many other DBMS vendors abandoned the "monolithic, instance-scoped temporary data space" years ago, yet SQL Server soldiers on, putting...(read more)

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it. Prior to SSDT you would need to define a DEFAULT constraint; however, it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment, and that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
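    For completeness, the Post-Deployment follow-up that Jamie mentions might look something like this (a sketch only; the real value and predicate depend on your data):

        -- Post-Deployment script (sketch): replace the arbitrary
        -- smart default (0) with a meaningful value.
        UPDATE [dbo].[T1]
        SET [C1] = 1          -- hypothetical real value
        WHERE [C1] = 0;       -- rows that received the smart default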

    Read the article

  • Time for the yearly Microsoft Pilgrimage known as the MVP Summit

    - by drsql
    One of the most fun events of the year is the MVP Summit, if perhaps for no other reason than my preparation for it is packing a suitcase. No presentations to write or prepare for, nothing. No activities that require anything more of me than to show up with my 2 shovels: one for the amazing amounts of knowledge that will be flowing from the Microsoft folks to us, and the other, actually more of a fork, to get in all of the great food they serve us. The whole event is a lot like any other conference,...(read more)

    Read the article

  • Looking Back at PASS Summit 2013 - Location

    - by RickHeiges
    Now that it has been a few weeks since the Summit, I wanted to look back at the location "experiment". Convention Center - It seemed to work well for the conference. There were quite a few areas where you could sit down and get some work done or have a discussion. For the larger welcome reception the first night, I really liked the different areas. If you wanted to enjoy the Quiz Bowl, the ballroom area was set up nicely with big screens so that everyone could see and hear. The area right...(read more)

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month so I'm a bit late posting this review here.

    Why I chose this book: I actually know a few of the authors of this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title - to learn how to solve a problem using products from Microsoft.

    What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternate solutions to solve it, and then give the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I've read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well - it would make for a great library for a Systems Architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point - this book is aimed at folks who create solutions within an organization. It's not aimed at Administrators, DBAs, Developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text - the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style.

    I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

  • SQL File Layout Viewer 1.2

    - by merrillaldrich
    Just ahead of presenting it at SQL Saturday in my home town of Minneapolis / Saint Paul, I’m happy to release an updated version of the SQL Server File Layout Viewer. This is a utility I released back in March for inspecting the arrangement of data pages in SQL Server files. If you will be in Minneapolis this Saturday (space permitting), please come out and see this tool in action! New Features Based on feedback from others in the SQL Server community, I made these enhancements: Page types now provide...(read more)

    Read the article

  • I Could Sure Use Some Windows Azure Code Samples…

    - by BuckWoody
    There are multiple ways to learn, and one of the most effective is with examples. You have multiple options with Windows Azure, including the Software Development Kit, the Windows Azure Training Kit, and now another one: the Microsoft All-In-One Code Framework, a free, centralized code sample library driven by developers' real-world pains and needs. The goal is to provide customer-driven code samples for all Microsoft development technologies, and Windows Azure is included. Once you hit the site, you download an EXE that will create a web-app based installer for a Code Browser. Once inside, you can configure where the samples store data and other settings, and then search for what you want. You can also request a sample – if enough people ask, we do it. OneCode has also partnered with the gallery and Visual Studio teams to develop the Sample Browser Visual Studio extension. It's an easy way for developers to find and download samples from within Visual Studio.

    Read the article

  • top tweets SOA Partner Community – June 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - Oracle SOA: Learn how Business Rules are used in Oracle SOA Suite. New free self-study course - Oracle Univ. #soa #oraclesoa http://pub.vitrue.com/ll9B
    - OPITZ CONSULTING: How do #BPM and #SOA fit together? Watch the 100-seconds video lesson by @Rolfbaer - http://ow.ly/luSjK @soacommunity
    - Andrejus Baranovskis: Customized BPM 11g PS6 Workspace Application http://fb.me/2ukaSBXKs
    - Mark Nelson: Case Management Samples Released http://wp.me/pgVeO-Lv
    - Mark Nelson: Instance Patching Demo for BPM 11.1.1.7 http://wp.me/pgVeO-Lx
    - Simone Geib: Antony Reynolds: Target Verification #oraclesoa https://blogs.oracle.com/reynolds/
    - OPITZ CONSULTING: "It's all about Integration - Developing with Oracle #Cloud Services" @t_winterberg files: http://ow.ly/ljtEY #cloudworld @soacommunity
    - Arun Pareek: Functional Testing Business Processes In Oracle BPM Suite 11g http://wp.me/pkPu1-pc via @arrunpareek
    - SOA Proactive: Want to get started with Human Workflow? Check out the introductory video on OTN: http://pub.vitrue.com/enIL
    - C2B2 Consulting: Free tech workshop, London, 6th of June - Diagnosing Performance & Scalability Problems in Oracle SOA Suite http://www.c2b2.co.uk/oracle_fusion_middleware_performance_seminar @soacommunity
    - Oracle BPM: Must-have technologies for delivering effective #CX: #BPM #Social #Mobile > #OracleBPM whitepaper http://pub.vitrue.com/6pF6
    - OracleBlogs: Introduction to Web Forms - Basic Tutorial http://ow.ly/2wQLTE
    - OTNArchBeat: Complete State of SOA podcast now available w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/PZFw
    - Ronald Luttikhuizen: VENNSTER Blog | Article published - Fault Handling and Prevention - Part 2 | http://blog.vennster.nl/2013/05/article-published-fault-handling-and.html
    - Mark Nelson: Getting to know Maven http://wp.me/pgVeO-Lk
    - gschmutz: Cool! Our 2nd article has just been published: "Fault Handling and Prevention for Services in Oracle Service Bus" http://pub.vitrue.com/jMOy
    - David Shaffer: Interesting SOA Development and Delivery post on the A-Team Redstack site - http://bit.ly/18oqrAI. Would be great to get others to contribute!
    - Mark Nelson: BPM PS6 video showing process lifecycle in more detail (30 min) http://wp.me/pgVeO-Ko
    - SOA Proactive: Webcast: "Introduction and Troubleshooting of the SOA 11g Database Adapter", May 9th. Register now at http://pub.vitrue.com/8In7
    - Mark Nelson: SOA Development and Delivery http://wp.me/pgVeO-Kd
    - Oracle BPM: Manoj Das, VP Product Management, talks about the new #OracleBPM release #BPM #processmanagement http://pub.vitrue.com/FV3R
    - OTNArchBeat: Podcast: The State of SOA w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/OK2M
    - gschmutz: New article series on Industrial SOA started on OTN and Service Technology Magazine: http://guidoschmutz.wordpress.com/2013/04/22/first-two-chapters-of-industrial-soa-articles-series-have-been-published-both-on-otn-and-service-technology-magazine/ #industrialSOA
    - Danilo Schmiedel: Article series #industrialSOA published on OTN and Service Technology Magazine http://inside-bpm-and-soa.blogspot.de/2013/04/industrial-soa_22.html @soacommunity @OC_WIRE

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article
