Search Results

Search found 10594 results on 424 pages for 'umbraco blog'.

Page 121/424

  • FIX: A network error occurred during SQL Server Native Client installation

    - by Greg Low
    One of the things that I have been pestering the SQL team to do is to name their updates according to what is contained in them. For example, instead of just sqlncli.msi, what I'd prefer is that the file was called something like SQLServerNativeClient2k8SP1CU3x64.msi. So I normally rename them as soon as I receive them, to avoid confusion in future. However, today I found that doing so caused me a problem. After I renamed the file and installed it, the installation failed with the error: "A network...(read more)

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table.

    Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies the SQL query that reads the new rows. Otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd when you are using a Batch command with more than one Process command (which is the reason you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported: the code compiles without a syntax error, but you might get this error at execution time:

    The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only.

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes into the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.

    Read the article

  • SQL Server 2008 R2: StreamInsight changes at RTM: AdvanceTimeSettings

    - by Greg Low
    For those who have worked with the earlier versions of the simulator that Bill Chesnut and I constructed for the Metro content (the Highway Simulator), changes are also required in how AdvanceTimeSettings are specified. The AdapterAdvanceTimeSettings value is now generated by binding an AdvanceTimeGenerationSettings (based on your adapter configuration) with an AdvanceTimePolicy setting. public class TollPointInputFactory : ITypedInputAdapterFactory<TollPointInputConfig>, ITypedDeclareAdvanceTimeProperties...(read more)

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the "nearest day", but instead of simply removing the time (truncating the decimal part of the number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
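    For comparison, the difference between truncating and rounding is easy to demonstrate in T-SQL, where the legacy datetime-to-integer conversion also rounds to the nearest day rather than truncating. A minimal sketch (the sample value is hypothetical):

        -- CAST(datetime AS date) truncates; CAST(datetime AS int) rounds,
        -- so any PM time moves the value to the next day.
        DECLARE @d datetime = '2012-03-15T18:30:00';

        SELECT CAST(@d AS date) AS truncated_day,                         -- 2012-03-15
               DATEADD(DAY, CAST(@d AS int), '19000101') AS rounded_day;  -- 2012-03-16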

    Read the article

  • finding a WUXGA or matte laptop

    - by John Paul Cook
    UPDATED: HP still sells 17" WUXGA laptops - details in the new paragraph at the end. Lenovo, Dell, Sony and Sager do not sell a 1920x1200 (WUXGA) laptop. I understand that manufacturers provide what the market demands. I also understand that HDTV and the 1080p standard are heavily influencing both monitor and laptop screen resolutions. But I do not understand why there is so little demand for WUXGA laptops. Nor do I understand the popularity of glossy displays. I really don't like to look...(read more)

    Read the article

  • Is Data Science “Science”?

    - by BuckWoody
    I hold the term "science" in very high esteem. I grew up on the Space Coast in Florida, and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog - "Data Scientist". Since it's not a standard term, it has a lot of definitions, and in fact it has been disputed as a correct term at all. After all, the reasoning goes, if there's no such thing as "Data Science" then how can there be a Data Scientist? This argument has been made before, albeit with a different term - "Computer Science". In Peter Denning's excellent article "Is Computer Science Science?" (Communications of the ACM, Vol. 48, No. 4, April 2005) there are many points that separate "science" from "engineering" and even "art". I won't repeat the content of that article here (I recommend you read it on your own) but I will leverage the points he makes there.

    Definition of Science

    To ask whether data science is "science", we need to start with a definition of terms. Various references put the definition into the same basic areas: the study of the physical world, and the systematic and/or disciplined study of a subject area - and then they include the things studied, the bodies of knowledge and so on. The word itself comes from Latin, and means merely "to know" or "to study to know". Greek divides knowledge further into "truth" (episteme) and practical use or effects (tekhne). Normally computing falls into the second realm.

    Definition of Data Science

    And now a more controversial definition: Data Science. This term is so new and perhaps so niche that the major dictionaries haven't yet picked it up (my OED reference is older - can't afford to pop for the online registration at present). Researching the term's general use, I created an amalgam of the definitions this way: "Studying and applying mathematical and other techniques to derive information from complex data sets." Using this definition, data science certainly seems to be science - it's learning about and studying some object or area using systematic methods. But implicit within the definition is the word "application", which makes the process more akin to engineering or even technology than science. In fact, I find that using these techniques - and data itself - is part of science, not science itself. I leave out the concept of studying data patterns or algorithms as part of this discipline. That is actually a domain I see within research, mathematics or computer science. That of course is a type of science, but it does not seek practical applications.

    As part of the argument against calling it "Data Science", some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact it makes heavy use of them) than to a process that starts with an assumption and follows on from it.

    So, is Data Science "Science"? I'm uncertain - and I'm uncertain it matters. Even if we are facing rampant "title inflation" these days (does anyone introduce themselves as a secretary or supervisor anymore?) I can tolerate the term, at least with the intent that we use data to study problems across a wide spectrum rather than restricting it to a single domain. And I also understand those who have worked hard to achieve the very honorable title of "scientist" and who take issue with those who borrow the term without asking. What do you think? Science, or not? Does it matter?

    Read the article

  • A new Excel 2010 book for Data Analysis

    - by Marco Russo (SQLBI)
    Microsoft Press just announced the printing of Microsoft Excel 2010: Data Analysis and Business Modeling, the third edition of the book written by Wayne L. Winston. It covers many data analysis and modeling techniques using a very clear problem-solution approach, including a good statistical explanation whenever necessary. I suggest this book as a good complement to our Microsoft PowerPivot for Excel 2010: Give Your Data Meaning!...(read more)

    Read the article

  • Starting this week: Dublin, Maidenhead, and London

    - by KKline
    This might be the most overcommitted four-week period of my life. I'm tired just thinking about it! Not only am I traveling internationally and speaking over the next few weeks, I'm also helping on two book projects, learning some new applications from Quest Software, and helping on a small Transact-SQL refactoring project. Swag on hand? I've got a special printing of 500 video training DVDs for this trip: SQL Server Training on DMVs, and Performance Monitor and Wait Events. Plus, I'll have...(read more)

    Read the article

  • Is the Internet Making us Smarter or Not?

    - by BuckWoody
    I've been reading recently about an exchange among some very bright folks, some of whom posit that the Internet, with its instant-on, sometimes-right, big-statement-wins mentality, is making people think in a shallower way, teaching us to rely on others as experts and diluting our logical thought process. Others state that it broadens our perspective and extends our mental reach. Whenever I see this kind of exchange at the two ends of a spectrum, I begin to wonder if both sides might be correct.

    I can certainly say that I have changed my way of learning, reading, and social interaction because of the Internet. And my tolerance for reading long missives has indeed gone down. I tend to (mentally and literally) "bookmark" things I never seem to have time to get back to. But I also agree that I've been exposed to thoughts, ideas and people I never would have encountered any other way. So how to deal with this dichotomy?

    Well, I'm going to go off and think about it. No, I'm really going to go off, for a full week, to a cabin I've rented in a National Forest in the Midwest. It has no indoor plumbing, phones, Internet connection or anything else - only a bed to sleep in and a place to cook a little. I'm taking one book, some paper, and a guitar with me, and that's it. I plan to spend my days walking, reading a little, playing a little on the guitar, but mostly just thinking. Those of you who know me might find this unusual. I'm an always-on, hyper-caffeinated, overly busy, connected person. I haven't taken a vacation in five years, at least not for more than two or three days at a time. Even then, I keep us on the move constantly - our vacations aren't cruises or anything like that. I check e-mail, post and all that. When I'm not on vacation, I live with and leverage lots of technology, and work with those who do the same. This, however, is a really "unplugged" event, and I'm hoping that it will let me unpack the things I've been stuffing in my head. I plan to spend a lot of time on a single subject, writing notes, thinking, and writing more notes.

    So after I post tomorrow's "quote of the day" I'll be "going dark" for a week. No Twitter, Facebook, LinkedIn, e-mail, or chat; none of my five blogs will get updated, and I'll have to turn in my two articles for InformIT.com early. I won't have access to my college class portal, so my students will be without me for a week. I will really be offline. I'll see you in a week - hopefully a little more educated. See you then.

    Read the article

  • SQLPass NomCom election: Why I voted twice

    - by Hugo Kornelis
    Did you already cast your votes for the SQLPass NomCom election? If not, you really should! Your vote can make a difference, so don't let it go to waste. The NomCom is the group of people that prepares the elections for the SQLPass Board of Directors. With the current election procedures, their opinion carries a lot of weight. They can reject applications, and the order in which they present candidates can be considered a voting advice. So use care when casting your votes – you are giving a lot...(read more)

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were more simple, you would have fewer choices. Sometimes that's a good thing. Just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing! Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011

    It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before for other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or questions from the actual tests - just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I update them, or just subscribe to the RSS feed. A "Green" color on the check-block means I've done that part so far, red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:

        Introducing Microsoft .NET (Pro-Developer): link
        Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
        Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is at the official site for the certification: link
    [ ] On that page you'll find several resources, and the first you should follow is "Save to my learning", so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.

    [ ] Designing Data Storage Architecture (18%)
        Books I'm Reading: | Links: | My Notes:
    [ ] Optimizing Data Access and Messaging (17%)
        Books I'm Reading: | Links: | My Notes:
    [ ] Designing the Application Architecture (19%)
        Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform: link | Links: | My Notes:
    [ ] Preparing for Application and Service Deployment (15%)
        Books I'm Reading: | Links: | My Notes:
    [ ] Investigating and Analyzing Applications (16%)
        Books I'm Reading: | Links: | My Notes:
    [ ] Designing Integrated Solutions (15%)
        Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) | Links: | My Notes:

    Read the article

  • Visualize Disaster

    - by merrillaldrich
    Or, How Mirroring Off-Site Saved my #Bacon My company does most things right. Our management is very supportive, listens and generally funds the technology that makes sense for the best interest of the organization. We have good redundancy, HA and disaster recovery in place that fit our objectives. Still, as they say, bad things can happen to good people. This weekend we did have an outage despite our best efforts, and that’s the reason for this post. It went pretty well for my team, all things considered,...(read more)

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly "-as-a-Service" models of computing - has a different cost model. This sounds obvious on the surface, but it's often forgotten during the design and coding phases of a project. In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires we've already paid for, we don't often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things.

    It's not just that you're using things that now cost money - it's that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay to perform them, you can set a profit margin that is easy to track.

    Here's a case in point. Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using. Since you're paying by the size of the instance, it seems best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and tied the cost more closely to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on premises into your new application structure.

    Read the article

  • F# – It’s time to grow up baby.

    - by MarkPearl
    In the few months since I started learning F#, I have begun to notice an increase in the number of people blogging about the language. Sure, it could just be that I am noticing it more because I am actively looking out for questions and blog posts, but even in my day-to-day reading of the Code Project Daily News and Stack Overflow questions there seems to be increased activity around the language. So what sparked this post? Well, today I logged in and saw that the latest podcast by DNR was on F#, and then immediately afterwards I received an email from CodeProject with a great article comparing F# and Scala. Currently, as of this blog posting (21 May 2010), F# is ranked on the TIOBE site at position 43, but I am willing to put money on it that over the next few TIOBE ratings this ranking will continue to rise until F# is in the top 20.

    Read the article

  • Script to create or drop all primary keys now on TechNet Wiki.

    - by John Paul Cook
    I posted my script to create or drop all primary keys on the TechNet Wiki. You can find it at http://social.technet.microsoft.com/wiki/contents/articles/script-to-create-or-drop-all-primary-keys.aspx . I first published the script here in 2009 and I've always wanted a way for the community to enhance it or correct it. The TechNet Wiki makes that possible. Visit the Wiki and see if you like this approach to publishing scripts....(read more)
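    For a flavor of the general approach (a minimal sketch of the idea, not John's actual script - see the Wiki link for that), the DROP half can be generated from the catalog views:

        -- Generate an ALTER TABLE ... DROP CONSTRAINT statement
        -- for every primary key in the current database.
        SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
             + ' DROP CONSTRAINT ' + QUOTENAME(kc.name) + ';'
        FROM sys.key_constraints AS kc
        JOIN sys.tables AS t ON kc.parent_object_id = t.object_id
        JOIN sys.schemas AS s ON t.schema_id = s.schema_id
        WHERE kc.type = 'PK';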

    Read the article

  • How SQL Saturday could be better

    - by AaronBertrand
    I've been to a lot of SQL Saturdays. They are great events to attend - from a community standpoint, from a learning standpoint, and from a speaker growth standpoint. Who could ask for more, right? Great sessions, from passionate speakers willing to both teach and learn, fantastic networking opportunities and lunch. All for free, or at a very low cost - some events need to recover costs and charge $10 for lunch. Still a phenomenal bargain IMHO. But we all know that these events aren't perfect... there...(read more)

    Read the article

  • When was sys.dm_os_wait_stats last cleared?

    - by SQLOS Team
    The sys.dm_os_wait_stats DMV provides essential metrics for diagnosing SQL Server performance problems. Because it returns incrementally accumulating information about all the completed waits encountered by executing threads, it is a useful way to identify bottlenecks such as IO latency issues or waits on locks. The counters are reset each time SQL Server is restarted, or when the following command is run:

        DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

    To make sense of these wait values you need to know how they change over time. Suppose you are asked to troubleshoot a system and you don't know when the wait stats were last zeroed. Is there any way to find the elapsed time since this happened? If the wait stats were not cleared using the DBCC SQLPERF command, then you can simply correlate the stats with the time SQL Server was started, using the sqlserver_start_time column introduced in SQL Server 2008 R2:

        SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

    But how do you tell whether someone has run DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) since the server was started, and if they did, when? Without this information the initial, or historical, wait stats have less value until you can measure deltas over time. There is a way to at least estimate when the stats were last cleared, by using the wait stats themselves and choosing a thread that spends most of its time sleeping. A good candidate is the SQL Trace incremental flush task, which mostly sleeps (in 4-second intervals); in between, it attempts to flush (if there are new events - which is rare when only the default trace is running) - so it pretty much sleeps all the time. Hence the time it has spent waiting is very close to the elapsed time since the counter was reset. Credit goes to Ivan Penkov in the SQLOS dev team for suggesting this.

    Here's an example. 144 seconds after the server was started:

        SELECT TOP 10 wait_type, wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        wait_type                           wait_time_ms
        ----------------------------------- ------------
        XE_DISPATCHER_WAIT                        242273
        LAZYWRITER_SLEEP                          146010
        LOGMGR_QUEUE                              145412
        DIRTY_PAGE_POLL                           145411
        XE_TIMER_EVENT                            145216
        REQUEST_FOR_DEADLOCK_SEARCH               145194
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP          144325
        SLEEP_TASK                                 73359
        BROKER_TO_FLUSH                            73113
        PREEMPTIVE_OS_AUTHENTICATIONOPS              143

        (10 rows affected)

    Reset:

        DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Running the same query again 8 seconds later:

        wait_type                           wait_time_ms
        ----------------------------------- ------------
        REQUEST_FOR_DEADLOCK_SEARCH                10013
        LAZYWRITER_SLEEP                            8124
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP            8017
        LOGMGR_QUEUE                                7579
        DIRTY_PAGE_POLL                             7532
        XE_TIMER_EVENT                              5007
        BROKER_TO_FLUSH                             4118
        SLEEP_TASK                                  3089
        PREEMPTIVE_OS_AUTHENTICATIONOPS               28
        SOS_SCHEDULER_YIELD                           27

        (10 rows affected)

    And after 12 seconds:

        wait_type                           wait_time_ms
        ----------------------------------- ------------
        REQUEST_FOR_DEADLOCK_SEARCH                15020
        LAZYWRITER_SLEEP                           14206
        LOGMGR_QUEUE                               14036
        DIRTY_PAGE_POLL                            13973
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP           12026
        XE_TIMER_EVENT                             10014
        SLEEP_TASK                                  7207
        BROKER_TO_FLUSH                             7207
        PREEMPTIVE_OS_AUTHENTICATIONOPS               57
        SOS_SCHEDULER_YIELD                           28

        (10 rows affected)

    It may not be accurate to the millisecond, but it can provide a useful data point, and give an indication whether the wait stats were manually cleared after startup, and if so, approximately when.

    - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
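    Putting the technique together in a single query - a minimal sketch, assuming the SQL Trace flush task really has been sleeping almost continuously:

        -- Estimate when sys.dm_os_wait_stats was last cleared by subtracting the
        -- SQLTRACE_INCREMENTAL_FLUSH_SLEEP wait time from the current time, then
        -- compare the estimate with the server start time. Converting to seconds
        -- avoids int overflow in DATEADD after weeks of uptime.
        SELECT DATEADD(SECOND, -CAST(ws.wait_time_ms / 1000 AS int), SYSDATETIME()) AS estimated_clear_time,
               si.sqlserver_start_time
        FROM sys.dm_os_wait_stats AS ws
        CROSS JOIN sys.dm_os_sys_info AS si
        WHERE ws.wait_type = 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';

    If estimated_clear_time is noticeably later than sqlserver_start_time, someone has almost certainly run DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) since startup.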

    Read the article

  • Understanding #DAX Query Plans for #powerpivot and #tabular

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote a very interesting white paper about DAX query plans. We published it on a page where we'll gather articles and tools about DAX query plans: http://www.sqlbi.com/topics/query-plans/. I reviewed the paper, and it is the result of many months of study - we know that we have just scratched the surface of this topic, also because we still don't have enough information about the internal behavior of many of the operators contained in a query plan. However, by reading the paper you will start to read query plans, and you will understand how the optimization that Chris Webb found a month ago for the events-in-progress scenario works. The white paper also contains a more optimized query (10 times faster), even if the performance depends on data distribution and the best choice really depends on the data you have. Now you should be curious enough to read the paper until the end, because the more optimized query is the last example in the paper!

    Read the article

  • Book Review: Expert Cube Development with Microsoft SQL Server 2008 Analysis Services

    - by Greg Low
    I spent last week on campus in Redmond with the SQL Server Analysis Services Maestro program. It was great to have a chance to focus on SSAS for a week. As part of that, I did quite a bit of reading as I had quite a bit of travelling time. Ironically, I re-read a few books. The first was Marco Russo, Alberto Ferrari and Chris Webb's book Expert Cube Development with Microsoft SQL Server 2008 Analysis Services . I've often told BI classes that I've been teaching that this is a really good book and...(read more)

    Read the article

  • Dev Lop

    - by Jason Franks
    Back in the early 90s, before I was a professional geek--much less a geek with a blog--I saw this old chop-socky movie. I don't remember what it was called, or who was in it... all I remember is that, in one scene, the venerable sensei tells the hero: "You must develop your nunchaku technique." This became a bit of a catchphrase amongst my high school mates. Well folks, I am developing my technique. This blog has been renamed and the old posts removed--I could go into my reasons for this, but that would defeat the point of the exercise. Sorry if you liked 'em. It has been a good couple of years since I wrote anything here, so I doubt that I am putting out any regular readers. Will I be posting here more often, now that I've renamed and rethemed the place? I don't know. In the meantime, check it out: Bruce Lee playing ping pong with nunchaku. --JF

    Read the article

  • What types of objects are useful in SQL CLR?

    - by Greg Low
    I've had a number of people over the years ask about whether or not a particular type of object is a good candidate for SQL CLR integration. The rules that I normally apply are as follows:

        Database Object     Transact-SQL                  Managed Code
        Scalar UDF          Generally poor performance    Good option when limited or no data access
        Table-valued UDF    Good option if data-related   Good option when limited or no data access
        Stored Procedure    Good option                   Good option when external access is required or limited data access
        DML...(read more)
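    For readers new to SQL CLR, here is a minimal sketch of how a managed scalar UDF is wired up once the assembly is compiled; the assembly path, name, and method are hypothetical examples:

        -- Register a compiled .NET assembly (path and names are hypothetical),
        -- then bind a T-SQL scalar UDF to a static method inside it.
        CREATE ASSEMBLY StringUtilities
        FROM 'C:\clr\StringUtilities.dll'
        WITH PERMISSION_SET = SAFE;
        GO

        CREATE FUNCTION dbo.RemoveDiacritics (@input nvarchar(4000))
        RETURNS nvarchar(4000)
        AS EXTERNAL NAME StringUtilities.[StringUtilities.Functions].RemoveDiacritics;
        GO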

    Read the article

  • Participating in 3 SQL events in 8 days

    - by AaronBertrand
    Well, 8 days not including lead and lag travel time. A quick summary of the three events, and the flights it took to get me to each:

    SQL Saturday #105 - Dublin, Ireland
    Flights on March 21st: Providence -> Philadelphia (236 miles); Philadelphia -> Dublin (3,260 miles)
    Time zone change: +4 when we got there, plus Daylight Saving Time kicked in, so +5 after the event.

    I spoke at this event, and manned the SQL Sentry booth with cohort Scott Fallen. This event was a fantastic SQL Saturday - very...(read more)

    Read the article

  • Visual Studio 2012 and Oracle Development Environment

    - by John Paul Cook
    Creating a complete environment for developing .NET applications that target Oracle requires a little planning and an understanding of how Oracle connectivity works. You need to be methodical and test along the way so that you aren't trying to troubleshoot a multitude of interrelated problems at the end. I've made several assumptions in writing this post: You are using 64-bit Windows 7 because you are a developer with a lot of RAM. I think this post will help you even if you are running Windows 8 instead...(read more)

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >