Search Results

Search found 10580 results on 424 pages for 'blog adaptivesoftware biz'.

Page 116/424

  • The SQL Server Community

    - by AllenMWhite
    In case you weren't aware of it, I absolutely love the SQL Server community. The people I've gotten to know have amazing knowledge, and they love sharing that knowledge with anyone who wants to learn. How can you not love that? It's inspiring and humbling all at the same time. There are a number of venues where the SQL Server community comes together. I'm including Twitter, the PASS Summit, the various SQL Saturday events, SQLBits, Tech Ed, and the local user groups. Each of us takes part in...(read more)

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things.

    One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.

    If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty).

    That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an insert, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient.

    But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit.

    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
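    As a minimal sketch of the pattern described above – wrapping the MERGE in an INSERT...SELECT so the OUTPUT rows, including $action and the source-only src column, land in an audit table – here is an example. The table and column names are hypothetical and are not taken from Rob's original queries.

        DECLARE @Target TABLE (Id int PRIMARY KEY, Name varchar(50));
        DECLARE @Source TABLE (Id int PRIMARY KEY, Name varchar(50), src varchar(20));
        DECLARE @Audit  TABLE (ActionTaken nvarchar(10), Id int,
                               OldName varchar(50), NewName varchar(50), src varchar(20));

        INSERT @Target VALUES (1, 'Old value');
        INSERT @Source VALUES (1, 'New value', 'SystemA'), (2, 'Brand new', 'SystemB');

        -- The outer INSERT captures the OUTPUT of the MERGE, including the
        -- src column that never touches the target table.
        INSERT @Audit (ActionTaken, Id, OldName, NewName, src)
        SELECT ActionTaken, Id, OldName, NewName, src
        FROM (
            MERGE @Target AS t
            USING @Source AS s ON t.Id = s.Id
            WHEN MATCHED THEN UPDATE SET Name = s.Name
            WHEN NOT MATCHED THEN INSERT (Id, Name) VALUES (s.Id, s.Name)
            OUTPUT $action, s.Id, deleted.Name, inserted.Name, s.src
        ) AS changes (ActionTaken, Id, OldName, NewName, src);

        SELECT * FROM @Audit;

    For a pure insert, the same shape works with ON 1=2 so that nothing matches, exactly as described above.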

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented “feature” in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you’re familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties – in my opinion – is the loss of the snapshot label that served two purposes. First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created.

    As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn’t see it in the demonstration I was preparing. It sort of upset my routine, and I’m rather partial to my routines. I thought perhaps I wasn’t looking in the right place and changed Report Manager from Tile View to Detail View, but no – that label was still missing. In the grand scheme of life, it’s not an earth-shattering change, but you’ll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!)

    A snapshot from the past

    In case you don’t remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here’s an image I snagged from my Reporting Services 2008 Step by Step manuscript.

    A snapshot in the present

    A report server running in SharePoint integrated mode had no such label. There you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here’s a screenshot of Report Manager in the 2008 R2 version. One of these is a snapshot and the rest execute on demand. Can you tell which is the snapshot?

    Consider descriptions as an alternative

    So my report snapshot demonstration has one less step, and I’ll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn’t up-to-date if the users know that something changed in the source of the report. Notice that the full description doesn’t display in Tile View, so keep it short and sweet or instruct users to open Details View to see the entire description.

    Read the article

  • When was sys.dm_os_wait_stats last cleared?

    - by SQLOS Team
    The sys.dm_os_wait_stats DMV provides essential metrics for diagnosing SQL Server performance problems. Returning incrementally accumulating information about all the completed waits encountered by executing threads, it is a useful way to identify bottlenecks such as IO latency issues or waits on locks. The counters are reset each time SQL Server is restarted, or when the following command is run:

        DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

    To make sense out of these wait values you need to know how they change over time. Suppose you are asked to troubleshoot a system and you don't know when the wait stats were last zeroed. Is there any way to find the elapsed time since this happened? If the wait stats were not cleared using the DBCC SQLPERF command then you can simply correlate the stats with the time SQL Server was started using the sqlserver_start_time column introduced in SQL Server 2008 R2:

        SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

    However how do you tell if someone has run DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) since the server was started, and if they did, when? Without this information the initial, or historical, wait_stats have less value until you can measure deltas over time. There is a way to at least estimate when the stats were last cleared, by using the wait stats themselves and choosing a thread that spends most of its time sleeping. A good candidate is the SQL Trace incremental flush task, which mostly sleeps (in 4 second intervals) and in between it attempts to flush (if there are new events – which is rare when only the default trace is running) – so it pretty much sleeps all the time. Hence the time it has spent waiting is very close to the elapsed time since the counter was reset. Credit goes to Ivan Penkov in the SQLOS dev team for suggesting this.

    Here's an example. 144 seconds after the server was started:

        select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

        wait_type                            wait_time_ms
        ------------------------------------ ------------
        XE_DISPATCHER_WAIT                         242273
        LAZYWRITER_SLEEP                           146010
        LOGMGR_QUEUE                               145412
        DIRTY_PAGE_POLL                            145411
        XE_TIMER_EVENT                             145216
        REQUEST_FOR_DEADLOCK_SEARCH                145194
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP           144325
        SLEEP_TASK                                  73359
        BROKER_TO_FLUSH                             73113
        PREEMPTIVE_OS_AUTHENTICATIONOPS               143

        (10 rows affected)

    Reset:

        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    After 8 seconds:

        select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

        wait_type                            wait_time_ms
        ------------------------------------ ------------
        REQUEST_FOR_DEADLOCK_SEARCH                 10013
        LAZYWRITER_SLEEP                             8124
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP             8017
        LOGMGR_QUEUE                                 7579
        DIRTY_PAGE_POLL                              7532
        XE_TIMER_EVENT                               5007
        BROKER_TO_FLUSH                              4118
        SLEEP_TASK                                   3089
        PREEMPTIVE_OS_AUTHENTICATIONOPS                28
        SOS_SCHEDULER_YIELD                            27

        (10 rows affected)

    After 12 seconds:

        select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

        wait_type                            wait_time_ms
        ------------------------------------ ------------
        REQUEST_FOR_DEADLOCK_SEARCH                 15020
        LAZYWRITER_SLEEP                            14206
        LOGMGR_QUEUE                                14036
        DIRTY_PAGE_POLL                             13973
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP            12026
        XE_TIMER_EVENT                              10014
        SLEEP_TASK                                   7207
        BROKER_TO_FLUSH                              7207
        PREEMPTIVE_OS_AUTHENTICATIONOPS                57
        SOS_SCHEDULER_YIELD                            28

        (10 rows affected)

    It may not be accurate to the millisecond, but it can provide a useful data point, and give an indication whether the wait stats were manually cleared after startup, and if so approximately when.

    - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
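    As a rough sketch of how the estimation described above can be turned into a single query (my own example, not from the original post; it assumes SQL Server 2008 R2 or later for sqlserver_start_time):

        -- Estimate when sys.dm_os_wait_stats was last cleared by comparing the
        -- SQL Trace incremental flush sleep time with the server start time.
        SELECT
            si.sqlserver_start_time,
            ws.wait_time_ms / 1000 AS seconds_since_clear_estimate,
            DATEADD(SECOND, -CAST(ws.wait_time_ms / 1000 AS int), GETDATE()) AS estimated_clear_time,
            DATEDIFF(SECOND, si.sqlserver_start_time, GETDATE()) AS seconds_since_startup
        FROM sys.dm_os_wait_stats AS ws
        CROSS JOIN sys.dm_os_sys_info AS si
        WHERE ws.wait_type = 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';
        -- If seconds_since_clear_estimate is much smaller than seconds_since_startup,
        -- someone has cleared the wait stats since the server started.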

    Read the article

  • VCPASS: Extend your T-SQL Scripting with PowerShell

    - by dbaduck
    Date: November 16, 2011
    Extend your T-SQL Scripting with PowerShell
    Description: I'll be covering some of the different ways we can use PowerShell to extend our T-SQL scripting. This session will include a mix of using SMO, .NET classes, and SQLPS to help you understand the power of this new scripting technology. At the end we’ll be creating a solution that puts together all these techniques.
    Date/Time: 11/16/2011 1:00 PM - 2:00 PM EST
    Registration Link: https://www.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=7wzjxg98v9160twm...(read more)

    Read the article

  • SQL Saturday #274 Slovenia

    - by Dejan Sarka
    Yes, here it is: SQL Saturday #274 is coming to Slovenia (#sqlsatSlovenia). The event will take place on Saturday, December 21st, at pixi* labs, Informacijske tehnologije, d.o.o., Poslovna cona A 2, SI-4208 Šencur. This company generously offered to host the event. We, the whole Slovenian SQL Server community, are very grateful for this.

    At this time, a call for speakers went out, and we are already getting the first proposals. We are especially happy that we will get the possibility to show the foreign speakers how beautiful Slovenia, and especially the capital Ljubljana, is in December. Expect a lot of partying right on the streets, no matter the weather. Be prepared, we have slightly weird customs when it comes to drinks. For example, our regular special discount offer is not three drinks for the price of two; it is six drinks for the price of five.

    If you are a speaker or want to become one, consider sending a proposal. Most of the sessions will be held in English, so even if you don’t want to speak, consider coming as a visitor as well. Or maybe you would be interested in becoming a sponsor. Although we are targeting a low-budget event, any kind of sponsorship is very welcome. Please feel free to contact the organizers if you are interested in becoming a sponsor: Matija Lah – [email protected], Mladen Prajdic – [email protected], or Dejan Sarka – [email protected]. Looking forward to seeing you all!

    Read the article

  • Visualize Disaster

    - by merrillaldrich
    Or, How Mirroring Off-Site Saved my #Bacon My company does most things right. Our management is very supportive, listens and generally funds the technology that makes sense for the best interest of the organization. We have good redundancy, HA and disaster recovery in place that fit our objectives. Still, as they say, bad things can happen to good people. This weekend we did have an outage despite our best efforts, and that’s the reason for this post. It went pretty well for my team, all things considered,...(read more)

    Read the article

  • SQL Server v.Next (Denali) : Metadata enhancements

    - by AaronBertrand
    In my previous job, we had several cases where schema changes or incorrect developer assumptions in the middle tier or application logic would lead to type mismatches. We would have a stored procedure that returns a BIT column, but then change the procedure to have something like CASE WHEN <condition> THEN 1 ELSE 0 END. In this case SQL Server would return an INT as a catch-all, and if .NET was expecting a boolean, BOOM. Wouldn't it be nice if the application could check the result set of the...(read more)
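    As a hedged illustration of the mismatch Aaron describes (the truncated post does not name the feature it goes on to cover; this sketch uses sys.dm_exec_describe_first_result_set, the metadata function that shipped with SQL Server 2012, and a made-up query):

        -- The CASE expression is typed as int, even though the intent was bit.
        SELECT name, system_type_name
        FROM sys.dm_exec_describe_first_result_set(
            N'SELECT CASE WHEN 1 = 1 THEN 1 ELSE 0 END AS IsActive;', NULL, 0);

        -- An explicit CAST keeps the contract the middle tier expects.
        SELECT name, system_type_name
        FROM sys.dm_exec_describe_first_result_set(
            N'SELECT CAST(CASE WHEN 1 = 1 THEN 1 ELSE 0 END AS bit) AS IsActive;', NULL, 0);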

    Read the article

  • Do you know your DNS server?

    - by John Paul Cook
    If you don’t know whether your DNS server is valid, you need to find out before July 9. The FBI found rogue DNS servers and replaced them with clean, safe DNS servers to protect the public. These safe, clean servers will be turned off on July 9, 2012. If your computer was compromised to use the rogue servers, it will stop resolving DNS queries on July 9 when the clean servers are turned off. The FBI has provided full technical details at http://www.fbi.gov/news/stories/2011/november/malware_110911/DNS-changer-malware.pdf...(read more)

    Read the article

  • SQLPass NomCom election: Why I voted twice

    - by Hugo Kornelis
    Did you already cast your votes for the SQLPass NomCom election? If not, you really should! Your vote can make a difference, so don’t let it go to waste. The NomCom is the group of people that prepares the elections for the SQLPass Board of Directors. With the current election procedures, their opinion carries a lot of weight. They can reject applications, and the order in which they present candidates can be considered a voting advice. So use care when casting your votes – you are giving a lot...(read more)

    Read the article

  • Solving security issue in PowerPivot for SharePoint and Power View

    - by Marco Russo (SQLBI)
    I just installed a brand new server (well, a virtual machine) with SharePoint 2010 SP1 and SQL Server 2012 RC0, including PowerPivot and Reporting Services / Power View. The server is joined to the domain I use in our development environment. I published a workbook in the PowerPivot Gallery and my user was immediately able to connect, browse and navigate data of the Excel workbook published by SharePoint. Moreover, I was able to open it in Power View. However, other users failed to connect. After...(read more)

    Read the article

  • Down Time

    - by andyleonard
    Introduction Every now and then, everyone needs a break. How do we respond when community leaders need a break? How should we respond? It's Normal People are cyclic animals - humans are diurnal by nature. We eat at regular intervals and are most comfortable when things go according to schedule. This is the lizard brain in action. So it's perfectly normal for community volunteers and leaders to engage in cycles of activity and inactivity in the community. It is, after all, another cycle. We rely on...(read more)

    Read the article

  • Blogging from the PASS Summit : Nov. 8th keynote

    - by AaronBertrand
    Douglas McDowell talks about day 1, the video montage featuring folks here from all over the world, and the fiscal year. The important point I took from this is that PASS is a non-profit committed to investing its revenue back into the community. They are hiring another full-time community evangelist, adding IT resources for online resources like the SQL Saturday site, and further expanding global efforts. He introduces the new board members: Wendy Pastrick, James Rowland-Jones, and Sri Sridharan....(read more)

    Read the article

  • Data Movement and the Decision Matrix

    - by BuckWoody
    Maybe it’s my military background, or maybe I’ve always had this predilection, but I like to use two devices when I need to make a complex decision: a checklist and a decision matrix. I like to use a checklist because it ensures that I remember the big bits of what I need to do, and brings up questions or areas that I didn’t think about when evaluating options for the decision. And the decision matrix – that’s the thing I use to actually lay out those options. It’s simply a spreadsheet-like grid (I use Excel, but paper and pencil works as well) that lays out the requirements or advantages for the decision across the top, and the options I have on the left-hand side. Then in the “cells” I put whether or not that option on the left will meet the requirement in that column. I then simply “weight” each cell to organize the choices by best-fit. The right answer (or answers) will float right to the top.

    I was asked yesterday about options for moving data in SQL Server to another system. There are just dozens of ways to do this, from bcp to Replication, each with certain advantages and costs. But asking the questions for the top row first helped me show the person that it isn’t a particular technology that is important, it’s laying out those requirements and thinking about which elements are more important than the other. For instance, is it more important to have the data moved all the time, or is it OK if that happens once in a while? Does the data have to move in two directions or just one? All of these will help that answer jump right out.

    Try it sometime – it’s a great learning exercise, since it will force you to focus on filling out the matrix. The answer is out there, Neo.
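    As a small illustration of what such a grid might look like for the data-movement question (the options, requirements, weights, and scores below are invented for the example, not from Buck's post – score each cell 0–3, multiply by the column weight, and sum):

        Option \ Requirement         Continuous (x3)   Two-way (x2)   Low setup effort (x1)   Weighted total
        bcp                                 0                 0                3                      3
        SSIS package                        1                 0                2                      5
        Transactional replication           3                 1                1                     12

    Read down the totals and the best-fit option floats to the top, exactly as described above.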

    Read the article

  • New version of the upgrade slides available

    - by Mike Dietrich
    Sorry for not posting for some weeks now. Our blog admins discovered a bug in the MovableType blog software we are using which prevents direct updates or access to the comments. So if you have commented, especially on the VM topic, I have read your comments and I’ll approve them as soon as the admin part of MovableType works again. Besides that, Roy and I uploaded a new version of the slides last week: see http://apex.oracle.com/folien and use the keyword “upgrade112” (enter it in the empty field labeled Schluesselwort – German for “keyword”). Thanks for your patience! Mike

    Read the article

  • Other SCOM users at SQLSaturday #65 Vancouver?

    - by merrillaldrich
    After a little hair-graying fun around passport renewal and family logistics, it looks like I'll be at the Vancouver SQLSaturday! I am pumped. (I was entirely convinced they would call it "SQLSaturd' eh?" and I'm frankly a little disappointed about the name... :-) I'm on the tail end of a three-month deployment of System Center Operations Manager with the SQL management pack - if you are a DBA and SCOM user, too, I'd love to meet you and talk shop at the SQLSaturday event. Please drop me a line...(read more)

    Read the article

  • 24 hours of PASS is back!

    - by Sergio Govoni
    The most important free on-line event on SQL Server and Business Intelligence is back! The 24 Hours of PASS is coming back with a great edition fully based on the new features of SQL Server 2014. What can you expect from the next PASS Summit? Find out on June 25, 2014 (12:00 GMT) on 24 Hours of PASS: SQL Server 2014! Register now at this link. No matter what part of the world you follow the event from, the important thing is to know that it will be 24 hours of continuous training on SQL Server and Business Intelligence.

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run a DBCC CHECKDB on the database to check everything except those couple of tables due to time constraints. I posted a request on the Connect site about this some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option

    The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround as clients often had a very large number of objects that they did want to check and only one or two that they didn't.

    I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it’s part of the book we’re writing), so that I have a complete picture of the best function to use, depending on the rounding operation I need to do. Here is the list of functions used and then the results shown for a relevant set of values.

        FLOOR = FLOOR( Tests[Value], 0.01 )
        TRUNC = TRUNC( Tests[Value], 2 )
        ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 )
        MROUND = MROUND( Tests[Value], 0.01 )
        ROUND = ROUND( Tests[Value], 2 )

    ...(read more)

    Read the article

  • Observable Adapter

    - by Roman Schindlauer
    .NET 4.0 introduced a pair of interfaces, IObservable<T> and IObserver<T>, supporting subscriptions to and notifications for push-based sequences. In combination with Reactive Extensions (Rx), these interfaces provide a convenient and uniform way of describing event sources and sinks in .NET. The StreamInsight CTP refresh in November 2009 included an Observable adapter supporting “reactive” event inputs and outputs.

    While we continue to believe it enables an important programming model, the Observable adapter was not included in the final (RTM) release of Microsoft StreamInsight 1.0. The release takes a dependency on .NET 3.5 but for timing reasons could not take a dependency on .NET 4.0. Shipping a separate copy of the observable interfaces in StreamInsight – as we did in the CTP refresh – was not a viable option in the RTM release.

    Within the next months, we will be shipping another preview of the Observable adapter that targets .NET 4.0. We look forward to gathering your feedback on the new adapter design! We plan to include the Observable adapter implementation in the product in a future release of Microsoft StreamInsight.

    Read the article

  • MicroTraining: Managing SSIS Connections–10 Apr 2012 at 10:00 AM EDT!

    - by andyleonard
    I am pleased to announce another free Linchpin People MicroTraining Event! On Tuesday, 10 Apr 2012 at 10:00 AM EDT, I will present Managing SSIS Connections . In this presentation, I will show you several means for managing SSIS connectivity using built-in functionality and a custom trick or two I picked up over the past few years. Want to learn more? It’s free (and no phone number required)! Register today. :{>...(read more)

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing - have a different cost model. This is something that sounds obvious on the surface but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track.

    Here’s a case in point: Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) for your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on premises into your new application structure.

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is about how you can backup databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.

    The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created. Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration.

    Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.

    It's important to consider the cost impacts of this as well. You are charged for how ever many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.

    This is a much needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx
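    For reference, here is a minimal sketch of the manual CREATE DATABASE AS COPY OF approach described above (the database names are hypothetical; run it against the master database of the Azure SQL Database server):

        -- Start a transactionally-consistent, asynchronous copy of the source database.
        CREATE DATABASE SalesDb_ExportCopy AS COPY OF SalesDb;

        -- Poll the copy's progress from master; export the copy to a BACPAC once it
        -- completes, then drop it to avoid paying for the extra database.
        SELECT database_id, start_date, percent_complete
        FROM sys.dm_database_copies;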

    Read the article

  • Book Review (Book 12) - 20 Master Plots

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for May 2012 was: 20 Master Plots by Ronald B. Tobias. This is my final book review - at least for this year. I'll explain what I've learned in this book in particular, and in the last twelve months in general.

    Why I chose this book: Stories and themes are part of software, presenting, and working in teams. This book claims there are only 20 plots, ever. I wanted to find out.

    What I learned: Probably my most favorite read of the year. Deceptively small, amazingly insightful. The premise is that there are only a few "base" themes, and that once you learn them you can put together an interesting set of stories on most any topic. Yes, the author admits that this number has been different throughout history - some have said 50, others 14, and still others claim only one or two basic plots. This doesn't change the fact that you can build very complex stories from a simple set of circumstances and characters. Be warned - if you read this book it takes away much of the wonder from almost every movie or book you'll read from here on! I loved it.

    My favorite part is that the author gives you exercises to build stories, right from the start. I've actually used these as the start of a meeting to foster creativity. Amazing stuff. One of my favorite sections of the book deals with plot and story. Plot: The king died, and the queen died. Story: The king died, and the queen died of heartbreak. Add one or two words, and you have the essence of storytelling. A highly recommended read, for all folks of all ages. You'll like it, your spouse will like it, and your kids will like it. I learned to be a better storyteller, and it helped me understand that plots and stories are not just things in books - they are a direct reflection of human nature. That makes me a better manager of myself and others.

    And this is the last of the reviews - at least for this year. I probably won't post many more book reviews here, but I will keep up the practice. As a reminder, the goal was to select 12 books that will help you reach your career goals. They don't have to be technical, or even apply directly to your job - but they do need to be books that you mindfully select as getting you closer to what you want to be. Each month, jot down what you learned from the work. And see if it doesn't in fact get you closer to your goals. These readings helped me - I got a promotion this year, and I attribute at least some of that to the things I learned.

    Read the article
