Search Results

Search found 10594 results on 424 pages for 'umbraco blog'.

  • Chapter 7–Enforced Data Protection

    - by drsql
    As the book progresses, I find myself veering quite a bit from the original stated outline, because as I teach this material more (and I am teaching a daylong db design class in August at http://www.sqlsolstice.com/ … shameless plug, but it is on topic :) I find that a given order works better. Originally I had slated myself to spend three more chapters on modeling here, then get back to the implementation topics to finish out the book, but now I am going to keep plugging through...(read more)

    Read the article

  • Rules of Holes #7: Some Will Look Down on You.

    - by ArnieRowland
    I've been extolling the Rules of Holes, hoping to give you both courage to get out of your Hole, and solace for having allowed yourself to get into a Hole in the first place. But how about the others, the folks who see that you are in up to your neck, the folks who could guide you out, the folks who are secretly glad that it is you down in the Hole instead of them? So this brings us to Rules of Holes #7: When you are in a Hole, some will look down on you. Only a few will offer their hand, and of those,...(read more)

    Read the article

  • An honor to be among the SQL Server MVPs

    - by dbaduck
    I just found out last night that I was officially awarded SQL MVP. I am honored to be among those I respect and admire. I don’t contribute for the recognition, and as a former MVP Lead I know the caliber of those who contribute to the SQL Server community, including those who are not MVPs. I am grateful to those of you who have helped me get where I am in SQL Server land. I just wanted to say thanks to the SQL community for the support, and also to Microsoft for this award. Here’s...(read more)

    Read the article

  • Rules of Holes #3: A Better Shovel is NOT the Answer!

    - by ArnieRowland
    You stopped digging. You looked around and saw that you were still in the Hole. You needed to get out. AHA! Problem solved, you thought. You'll just get a better and more efficient shovel! I regret to tell you that the Third Rule of Holes applies: Switching to a more efficient shovel is unlikely to help you get out of the Hole. Yes, your resumed digging may be faster, more directed, and even well planned and articulated. But you will still be in the Hole, and digging. And that's just not the solution....(read more)

    Read the article

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a High-Availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2," because in a hybrid environment (which most of us have) the situation changes the requirements. There are those that look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support.

    As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT - whether that be IaaS, PaaS or SaaS, or more often, a mix.

    In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a Cloud vendor. For Windows Azure, we have plans for support that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/ You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them."

    I find that this is often the last thing architects think about in a solution. It's fine to defer the support question prior to deployment, but I would hold off on calling the system "production" until you have that plan in place. There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there were just something in a box we could buy, it doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article

  • TechEd 2014 Day 2

    - by John Paul Cook
    Today people asked me about backing up older versions of SQL Server to Azure. Versions as far back as SQL Server 2005 can easily be backed up to Azure Storage by installing the Microsoft SQL Server Backup to Windows Azure Tool. It installs a service of the same name that applies rules to SQL Server backups. You can tell the tool to compress or encrypt your SQL Server backups, and you can have it automatically upload your backups to Azure Storage. Even if you don’t want to upload your backups to Azure, you might...(read more)
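
    For comparison, note that newer releases don't need the external tool at all: SQL Server 2012 SP1 CU2 and later can back up directly to blob storage with BACKUP TO URL. A minimal sketch follows; the credential, storage account, and database names are placeholders, not from the post.

      -- Hypothetical names throughout; requires SQL Server 2012 SP1 CU2 or later.
      CREATE CREDENTIAL AzureBackupCredential
          WITH IDENTITY = 'mystorageaccount',   -- storage account name
          SECRET = '<storage access key>';      -- storage account access key

      BACKUP DATABASE AdventureWorks2012
          TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
          WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;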

    Read the article

  • New Master Data Services Training Available

    - by mattande
    [posted by Suzanne Selhorn, Technical Writer on the MDS team]
    Some new self-paced training is now available on the Microsoft Download Center. To take advantage of this training, you should have a working installation of MDS with sample data already loaded.
    01 Introduction: http://download.microsoft.com/download/5/9/F/59F1639E-EF57-4915-8848-EF1DC2157EBB/01 Introduction.pdf (This lesson provides an overview of MDS.)
    02 MDS Environment: http://download.microsoft.com/download/5/9/F/59F1639E-EF57-4915-8848-EF1DC2157EBB/02...(read more)

    Read the article

  • Speaker Prep Tip: Use the AV Studio Built into that Laptop

    - by merrillaldrich
    Over at erinstellato.com there is a great post this week about tips for new presenters. Ms. Stellato suggests, insightfully, that we record ourselves, which is really a fantastic piece of advice. What’s extra-cool is that today you don’t need any special equipment or expensive software to do just that. This week I “filmed” two run-throughs of my talk for SQL Saturday tomorrow. For me, the timing is the hardest thing – figuring out how much content I can really present in the time allowed without...(read more)

    Read the article

  • Supermicro motherboards and systems

    - by jchang
    I used to buy SuperMicro exclusively for my own lab. SuperMicro has always had a deep lineup of motherboards with almost every conceivable variation; in particular, they had the maximum memory and IO configurations desired for database servers. But around 2006 I became too lazy to source the additional components necessary to complete a system, and switched to Dell PowerEdge tower servers. Now I may reconsider, as neither Dell nor HP is offering the right combination of PCI-E slots. Nor...(read more)

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database and apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this to make that change:

      EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = 7200;

    The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
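
    Since the excerpt cuts off at the verification step, here is one way to check the current value (a sketch, not necessarily the author's own query): the CDC job settings live in msdb.

      -- Retention is stored in minutes; 7200 minutes = 5 days.
      SELECT database_id, job_type, retention
      FROM msdb.dbo.cdc_jobs
      WHERE job_type = N'cleanup';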

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind.

    Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and - very important - what are the security requirements? From the answers to these questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead - it needs a certain amount of memory for locking and so on. But it has a very clean boundary - everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well.

    My recommendation in complex situations like this is similar to a post on decisions that I did earlier - I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix - perhaps this person could segment customers into larger regions, districts, or products, each in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
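
    As a concrete illustration of the schemas-within-one-database option mentioned above, a minimal sketch (all names invented):

      -- One schema per customer inside a shared database; the schema is the
      -- security boundary.
      CREATE SCHEMA Customer0001 AUTHORIZATION dbo;
      GO
      CREATE TABLE Customer0001.Orders
      (
          OrderID int IDENTITY(1,1) PRIMARY KEY,
          OrderDate datetime NOT NULL
      );
      GO
      -- A principal scoped to just this customer's schema.
      CREATE USER Customer0001User WITHOUT LOGIN;
      GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Customer0001 TO Customer0001User;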

    Read the article

  • Extended Events demos on Microsoft Virtual Academy

    - by extended_events
    I had an opportunity recently to contribute a presentation to the Microsoft Virtual Academy as part of the Mission Critical Confidence using SQL Server 2012 course offering. The MVA offers a myriad of free training opportunities, so I encourage anyone interested in expanding their knowledge to take advantage of this offering. For those of you who don’t want to invest the time to go through the whole course, you can access my presentation here. I cover the following topics:
    - Integration of Extended Events into AlwaysOn troubleshooting.
    - Troubleshooting login failures using client/server correlation.
    - Troubleshooting query performance issues using client/server correlation.
    I’m not sure how long content is made available on MVA; I got the impression that it would be removed at some point in the future, but it should be there for at least several months. - Mike

    Read the article

  • Utility Queries–Structure of Tables with Identity Column

    - by drsql
    I have been doing a presentation on sequences of late (the last planned delivery of that presentation was last week, but you should be able to get the gist of things from the slides and the code posted on my presentation page), and as part of that I started writing some queries to interrogate the structure of tables. I started with tables using an identity column for some purpose, because they are considerably easier to handle than sequences, specifically because the limitations of identity columns make...(read more)
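
    The article has the author's full queries; as a flavor of what interrogating identity columns looks like, here is a small sketch against the sys.identity_columns catalog view:

      -- Lists every identity column with its seed, increment, and last value.
      SELECT OBJECT_SCHEMA_NAME(ic.object_id)    AS [schema_name],
             OBJECT_NAME(ic.object_id)           AS [table_name],
             ic.name                             AS [column_name],
             CAST(ic.seed_value AS bigint)       AS seed_value,
             CAST(ic.increment_value AS bigint)  AS increment_value,
             CAST(ic.last_value AS bigint)       AS last_value
      FROM sys.identity_columns AS ic
      JOIN sys.tables AS t ON t.object_id = ic.object_id
      ORDER BY [schema_name], [table_name];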

    Read the article

  • StreamInsight will not push feature releases through Microsoft Update going forward

    - by Roman Schindlauer
    Until now, we've released StreamInsight through the Microsoft Download Center and also through Microsoft Update. Going forward, we will release new StreamInsight versions only through the Microsoft Download Center and use MU only for service packs and security fixes (should any be needed). As a result of this decision, we are pulling the recent StreamInsight 2.1 release from MU; this release is still available in the Download Center. Don’t worry: there’s nothing wrong with the versions we’ve shipped on MU; we’ve just adjusted how we use MU. No action is necessary from our customers as a result of this change, and we are not rolling back any changes to your current installation, so if you installed StreamInsight 2.1 recently through Microsoft Update, it will still work fine. Regards, The StreamInsight Team

    Read the article

  • How to start with PowerPivot for Excel

    - by Marco Russo (SQLBI)
    Now that Office 2010 has been released, many people will start looking for resources to start learning PowerPivot. Of course, the book I’m writing will be helpful when it is published (September 2010), but you can also start with some online content on Microsoft sites. First of all, there is the web site dedicated to PowerPivot: http://www.powerpivot.com/ It contains several videos and demos, and it’s also possible to use a Virtual Lab without installing Office 2010 on your PC. Then, there is...(read more)

    Read the article

  • Suggestion: ALLFILES option for RESTORE

    - by Greg Low
    The default action when performing a backup is to append to the backup file, yet the default action when restoring a backup is to restore just the first file. I constantly come across customer situations where they are puzzled that they seem to have lost data after they have completed a restore. Invariably, it's just that they haven't restored all the backups contained within a single OS file. This happens most commonly with log backups, but also happens when they have not restored the most recent database backup file.

    It is not trivial to achieve this within simple T-SQL scripts when the number of backup files within the OS file is unknown. It really should be. I'd like to see a FILES=ALLFILES option on the RESTORE command. For RESTORE DATABASE, it should restore the most recent database backup plus any subsequent log files. For RESTORE LOG (which is the most important missing option), it should just restore all relevant log backups that are contained. If you agree, you know what to do: please vote: https://connect.microsoft.com/SQLServer/feedback/details/769204/option-to-restore-all-backups-files-within-a-media-set

    Alternately, how would you write a T-SQL command to restore all log backups within a single OS file where the number of files is unknown? I would love to hear creative solutions, because all the ones I can think of are pretty messy and need dynamic SQL.
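
    To make the pain concrete, here is what the manual workaround looks like today when you do know the file positions (paths and positions invented for illustration):

      -- Each backup inside the single OS file must be addressed by FILE = n.
      RESTORE DATABASE Sales
          FROM DISK = N'D:\Backups\sales_all.bak'
          WITH FILE = 1, NORECOVERY;   -- the database backup
      RESTORE LOG Sales
          FROM DISK = N'D:\Backups\sales_all.bak'
          WITH FILE = 2, NORECOVERY;   -- first log backup
      RESTORE LOG Sales
          FROM DISK = N'D:\Backups\sales_all.bak'
          WITH FILE = 3, RECOVERY;     -- last log backup
      -- RESTORE HEADERONLY lists the Position values, but consuming its result
      -- set when the file count is unknown is the messy dynamic-SQL part the
      -- post laments.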

    Read the article

  • Browser problem for background-size property [migrated]

    - by Sangram
    I am using a picture as the background of my blog header. The CSS I have used is:

      #header-wrapper {
        height: 125px;
        padding: 0;
        margin: 0;
        background: url("http://3.bp.blogspot.com/_lxBSX0YJV58/TOspWPI1r-I/AAAAAAAAA34/uw872WFS3ME/s1600/headerbg.jpg") top center no-repeat;
        background-size: 1120px 124px;
      }

    The original width of the image is 990px, and I stretched it to 1120px using the background-size: 1120px 124px property. It looks okay in Firefox 4 and Opera 11 but doesn't work in IE 7, Pale Moon, etc.: the image does not grow and remains 990px wide. You can check my blog HERE. How can I make it compatible with all browsers?

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours. During analysis of the data I found that a common pattern I was using was to find a batch with a duration longer than average and follow all the events it produced. This pattern got me thinking that I was discarding a substantial amount of the event data that had been collected, and that it would be great to reduce the collection overhead on the server if I could still get all activity from some batches.

    In the past I’ve used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike’s post here). This isn’t exactly what I want, though, as there would certainly be events from a particular batch that wouldn’t pass the predicate. What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id. Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout. So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection and still get all the events in those batches for investigation.

      CREATE EVENT SESSION session_10_percent ON SERVER
      ADD EVENT sqlserver.sql_statement_starting
          (WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
      ADD EVENT sqlos.wait_info
          (WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
      ADD EVENT sqlos.wait_info_external
          (WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
      ADD EVENT sqlserver.sql_statement_completed
          (WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
      ADD TARGET ring_buffer
      WITH (MAX_DISPATCH_LATENCY = 30 SECONDS, TRACK_CAUSALITY = ON)
      GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem. By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.
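
    For completeness (not part of the original post), starting the session and peeking at its ring buffer target look like this:

      ALTER EVENT SESSION session_10_percent ON SERVER STATE = START;

      -- The ring buffer contents are exposed as XML through the XE DMVs.
      SELECT CAST(t.target_data AS xml) AS captured_events
      FROM sys.dm_xe_sessions AS s
      JOIN sys.dm_xe_session_targets AS t
          ON s.address = t.event_session_address
      WHERE s.name = N'session_10_percent'
        AND t.target_name = N'ring_buffer';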

    Read the article

  • LAG function – practical use and comparison to old syntax

    - by Michael Zilberstein
    Recently I had to analyze a huge trace – 46GB of trc files. Looping over the files, I loaded them into a trace table using the fn_trace_gettable function, applying what filters I could in order to discard irrelevant data. I ended up with a 6.5 million row table, 7.4GB in total size. It contained a RowNum column defined as identity, primary key, clustered. One of the first things I noticed was that although the time difference between the first and last events in the trace was 10 hours, the total duration of all sql...(read more)
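
    The full post has the author's real queries; as a sketch of the pattern being compared (column names beyond RowNum are invented):

      -- LAG (SQL Server 2012+): gap between consecutive trace events.
      SELECT RowNum,
             StartTime,
             DATEDIFF(millisecond,
                      LAG(EndTime) OVER (ORDER BY RowNum),
                      StartTime) AS idle_ms
      FROM dbo.TraceData;

      -- Pre-2012 "old syntax": a self-join on adjacent RowNum values, which
      -- costs an extra seek per row.
      SELECT cur.RowNum,
             cur.StartTime,
             DATEDIFF(millisecond, prev.EndTime, cur.StartTime) AS idle_ms
      FROM dbo.TraceData AS cur
      LEFT JOIN dbo.TraceData AS prev
          ON prev.RowNum = cur.RowNum - 1;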

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say “If a man tells you that his car squirts milk in his eye when you lift the hood, don’t bet against that. You’ll end up with milk in your eye.” My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant “bet against the impossible” the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn’t exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said “well, must be something else.” She just couldn’t admit she was wrong. So don’t go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend that has recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn’t change the connection string information, and that’s provided by the target system, so this is impossible. But I won’t tell him that. Not until I look a little more. :)

    Read the article

  • IE 9 RC maybe possible to release on 10 February

    - by anirudha
    This is not surprising; we all know Microsoft tends to postpone its product release dates. I don't know the reason for it. Maybe it's a trick Microsoft uses to build buzz for its software, but sometimes it leaves a bad impression on users. In 2009, Microsoft put up a countdown widget for the launch of Visual Studio 2010, and many MSDN bloggers used it. Somasegar was one of them, putting the widget on his blog to show how much time remained until Visual Studio 2010 was released. But after the date was postponed, the widget disappeared, and the site that provided it went down. They used the same trick again when they postponed the Visual Studio release date from 20 March to 12 April. So wait and see, and next time don't believe that a product will really be released on the date shown on a blog.

    Read the article

  • Windows Not Sleeping All Night

    - by John Paul Cook
    Having a computer wake up when you don’t want it to wastes electricity and drains the battery on mobile devices. My desktop had been waking up at night, so I assumed it was some network traffic on my home network. I unchecked Allow this device to wake the computer on my network adapters (Figure 1: Network adapter Power Management tab). That didn’t solve the problem. I included the screen capture in Figure 1 because it could be part of the solution for someone else. To identify the root cause instead...(read more)
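
    The excerpt is truncated; a common way to chase down unwanted wakes (a general Windows sketch, not necessarily the steps the author goes on to describe) is the built-in powercfg utility, run from an elevated command prompt:

      REM What triggered the most recent wake:
      powercfg /lastwake
      REM Devices currently allowed to wake the system:
      powercfg /devicequery wake_armed
      REM Scheduled wake timers:
      powercfg /waketimers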

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages of my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments, how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms and client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies), I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the ways the product succeeds, and I see when it fails.

    Something that has really impacted my way of thinking is the level of importance any given shop places on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure a new project will work in production, and then carrying the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don’t. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I’ve found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious, but I’ve actually tracked it, and the places that do the testing and follow best practices really do save stress, time, and trouble.

    We all think that’s a good idea, but we just “don’t have time”. From what I’m seeing, though, you can gain time if you spend a little up front. You may find that you’re already spending the same amount of time that you would spend on the testing; you’re just spending it later, at night, under the gun. Food for thought.

    Read the article
