Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

Page 842/1121

  • OT: Improbable use for an iPad?

    - by merrillaldrich
    Here's an interesting tidbit: I have noticed an even more pronounced trend toward centralized or virtual workstations lately. Both my wife and I can sit at home, as we are now, at the dining room table and work on our laptops (exciting life, I know!), but neither of us is actually working locally on these machines. We are both remoting into machines at our respective workplaces. Hers is a desktop machine physically located at her desk, while mine is a virtual workstation in my company's data center...(read more)

    Read the article

  • Cell Transitions in Excel 2013 Preview–Fixed

    - by simonsabin
    If you’ve downloaded Excel 2013 and been working with it, you may have noticed the new cell transition feature. I'm not sure why they put it in; it feels a bit like the Aero interface, which I understand has been dropped in Windows 8. What you may have found is that the transition is buggy, Excel hangs, or the transition is jumpy. Well, I found the fix on http://answers.microsoft.com/en-us/office/forum/office_home-excel/hardware-acceleration-problem-with-excel-2013/894da202-48c0-4442-a371-955587c1b7c0 For...(read more)

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start to look at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix. When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all the details like this: This single statement consumed 61.4% of the total logical reads on the system while we were tracing it.  I also wanted to see the distribution of reads across statements.  That graph looks like this: On average, the highest ranked statement consumed just under 50% of the reads on the system.  When I tune a system, I’m usually starting in one of two modes: this “piece” is slow or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them.  You can make that individual piece faster but you may not affect the whole system. When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and it will leave more disk throughput for the rest of the queries. Here are some of the things I’ve learned querying this data: The highest ranked query averages just under 50% of the total reads on the system. The top 3 ranked queries average 73% of the total reads on the system. The top 10 ranked queries average 91% of the total reads on the system. Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems, so your mileage may vary. I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts.  The graph would look very similar to the one above but flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity. The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.
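
    As a rough illustration of the same idea (ranking statements by their share of total logical reads), here is a sketch that uses the plan-cache DMVs instead of uploaded trace data. This is not the query TraceTune or ClearTrace runs; it just shows the ranking and percentage-of-total calculation against sys.dm_exec_query_stats.

      -- Sketch: top statements by logical reads, with each statement's share of the total.
      -- Uses the plan cache rather than a trace, so it only covers currently cached plans.
      SELECT TOP (10)
          qs.total_logical_reads,
          qs.execution_count,
          CAST(100.0 * qs.total_logical_reads
               / SUM(qs.total_logical_reads) OVER () AS decimal(5,2)) AS pct_of_total_reads,
          SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset END
                - qs.statement_start_offset) / 2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_logical_reads DESC;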

    Read the article

  • TechEd 2010 Day Four: Learning how to help others learn

    - by BuckWoody
    I do quite a few presentations, and teach at the University of Washington, and also teach other classes. But I'm always learning from others how to help others learn. At events like TechEd I have access to some of the best speakers around, so I try to find out what they do that works. I attended a great session by Allen White, in which he demonstrated a set of PowerShell scripts. He said that Dan Jones of the Microsoft Manageability team told him that while he demonstrated a script he needed to provide some visual way to represent the process. Allen used one of the oldest visualizations around - a flowchart. It was the first time I'd seen one used to illustrate a PowerShell script, and it was very effective. I'm totally stealing the idea. All of us are teachers - we help others on our team understand what we're up to. Make a note of what you find effective when others teach you, and then meld that into your own way of teaching.

    Read the article

  • Windows 8 Consumer Preview and SkyDrive

    - by John Paul Cook
    SkyDrive integrates very nicely with Windows 8. More on that later. First, let’s discuss using Windows 8 x64 on VirtualBox. Since I wanted to test with x64 and didn’t have a Hyper-V server available, I used VirtualBox. The latest version of VirtualBox does list Windows 8 as a guest operating system choice. The first time I attempted the installation, I chose the experimental Direct3D graphics support. That didn’t work well. After what seemed like FOREVER looking at a completely black screen, I decided...(read more)

    Read the article

  • Excel 2013 Data Explorer and GeoFlow make 3-D maps quick and easy

    - by John Paul Cook
    Excel add-ins Data Explorer and GeoFlow work well together, mainly because they just work. Simple, fast, and powerful. I started Excel 2013, used Data Explorer to search for, examine, and then download latitude-longitude data and finally used GeoFlow to plot an interactive 3-D visualization. I didn’t use any fancy Excel commands and the entire process took less than 3 minutes. You can download the GeoFlow preview from here. It can also be used with Office 365. Start by clicking the DATA EXPLORER...(read more)

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly and it is clear that there is a tough choice. On the one hand people are saying that they prefer to have PASS put their money into chapters and things like 24hrs of PASS rather than an event on the east coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event. Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well laid out conference. What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance there was no option about whether people feel it's important to have industry-leading speakers, MVPs etc. at the conference, only questions about Microsoft speakers. I know that with survey writing it is very difficult to avoid biasing the answers one way or another. There was also no choice to show people's preference: would people prefer Microsoft speakers, or the summit to be held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers rather than the SQLCAT and CSS teams; surely that indicates another issue, a lack of understanding of what these teams do. All in all it is clear that people showed they want an event outside of Seattle and don't want PASS to be putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting applied to certain questions which has prioritised them over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013, or maybe they will rethink 2012, who knows.

    Read the article

  • Did You Know? I'm doing 3 more online seminars with SSWUG!

    - by Kalen Delaney
    As I told you in April , I recorded two more seminars with Stephen Wynkoop, on aspects of Query Processing. The first one will be broadcast on June 30 and the second on August 27. In between, we'll broadcast my Index Internals seminar, on July 23. Workshops can be replayed for up to a week after the broadcast, and you can even buy a DVD of the workshop. You can get more details by clicking on the workshop name, below, or check out the announcement on the SSWUG site at http://www.sswug.org/editorials/default.aspx?id=1948...(read more)

    Read the article

  • Geek City: What gets logged for SELECT INTO operations?

    - by Kalen Delaney
    Last week, I wrote about logging for index rebuild operations. I wanted to publish the result of that testing as soon as I could, because it dealt with a specific question I was trying to answer. However, I actually started out my testing by looking at the logging that was done for a different operation, and ended up generating some new questions for myself. Before I started testing the index rebuilds, I thought I would just get warmed up by observing the logging for SELECT INTO. I thought I...(read more)
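
    A minimal sketch of the kind of observation described here, assuming a throwaway test database: run a SELECT INTO and then summarize the log records with the undocumented fn_dblog() function. Table names are made up for illustration.

      -- Assumes a scratch database in SIMPLE recovery; do not run this on a production system.
      USE TestLogging;
      CHECKPOINT;   -- in SIMPLE recovery this clears out older log records

      SELECT * INTO dbo.OrdersCopy FROM dbo.Orders;

      -- Summarize what SELECT INTO wrote to the transaction log.
      SELECT Operation, Context,
             COUNT(*) AS log_records,
             SUM([Log Record Length]) AS bytes_logged
      FROM fn_dblog(NULL, NULL)
      GROUP BY Operation, Context
      ORDER BY bytes_logged DESC;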

    Read the article

  • String or binary data would be truncated.

    - by Derek Dieter
    This error message is relatively straightforward. The way it normally happens is when you are trying to insert data from a table that contains values that have larger data lengths than the table you are trying to insert into. An example of this would be trying to insert data from a permanent table, into [...]
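
    A small repro of the error, plus one way to find the offending rows; the table and column names are made up for illustration.

      CREATE TABLE #Staging (CustomerName varchar(100));
      CREATE TABLE #Target  (CustomerName varchar(20));

      INSERT INTO #Staging (CustomerName)
      VALUES ('A customer name that is considerably longer than twenty characters');

      -- Fails with: String or binary data would be truncated.
      INSERT INTO #Target (CustomerName)
      SELECT CustomerName FROM #Staging;

      -- Identify which source rows are too wide for the destination column.
      SELECT CustomerName, LEN(CustomerName) AS actual_length
      FROM #Staging
      WHERE LEN(CustomerName) > 20;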

    Read the article

  • SSRS 2008 R2 KPIs with bullet graphs

    Key Performance Indicators are typically displayed in a scorecard with stop-light indicators, which are either red, amber or green light icons. With this kind of indicator you can see the actual and target values in two different fields, as well as the status of the KPI as a red, amber or green color, but if the user wants to figure out the thresholds associated with the KPI, those values are generally not visible. Further, representing the threshold values in the scorecard itself defeats the purpose of the scorecard. The scorecard should display the KPI's status in the most summarized form and use a minimal amount of space on the dashboard. In this tip we look at how to address this issue.

    Read the article

  • Help make the next Summit even better

    - by Bill Graziano
    After the Summit we send out a survey to capture feedback.  We ask a consistent set of questions so we get good year over year results.  I’ve watched blog posts and email threads with ideas for a better Summit.  I got to sit with Denny and crew again on Saturday night and talk about what worked and what didn’t.  We’d like to capture those ideas in a way that you can vote on what’s important to you.  Please take a second and visit http://feedback.sqlpass.org/.  You can make suggestions, vote on the ideas already posted and add your own comments.  Help PASS make next year’s Summit “The Best Summit Ever!”

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following: a new strategy for version numbers, which now follow the pattern Major.Minor.TargetSQLServerVersion.Revision; added support for Auto Configurations; fixed a bug that reported an incorrect number of errors and warnings to Log Providers; fixed a bug that prevented correct casting of values when using the /Set and /Param options; errors and warnings are now counted more precisely; updated database and log import scripts to categorize logs by projects and sections (e.g. Project: MyBIProject, Sections: Staging and Datawarehouse); removed unused report stored procedures from the database; updated samples, with 12 samples now available to show ALL DTLoggedExec features; and from this version only SSIS 2008 will be supported. http://dtloggedexec.codeplex.com/releases/view/62218  It's useful to say something more on a couple of specific points. From this version only SSIS 2008 will be supported: yes, Integration Services 2005 is not supported anymore; the latest version capable of running SSIS 2005 packages is 1.0.0.2. Updated database and log import scripts to categorize logs by projects and sections: when you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows storing logged data of packages belonging to different projects in the same database. Updated samples: a complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy! Added support for Auto Configurations: this point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words, you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)  In the next few days I'll write the mentioned post on Auto-Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

    Read the article

  • mysql single database relocation

    - by asdmin
    I would like to know if it's possible to operate different databases in different filesystem locations. Background: we are a hosting service, which hosts MySQL, web, and SMTP for its customers, but all our services (sql, smtp, http) are located in a different place. We are going to assign a single logical volume to a customer, which will accommodate the customer's mailing, web pages and (hopefully) SQL database. Web pages and mailing are already covered, but I am not able to find a configuration setting which would enable me to specify the location of a database (the directory where MySQL stores the DB). Let me highlight that the target here is to relocate different databases to different locations in the filesystem, not to move them from a single place to another (single) place. Also, please do not bother answering with symbolic or hard links. ;) Thanks
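
    For what it's worth, MySQL has no per-database data directory setting, but if the tables are InnoDB and innodb_file_per_table is enabled, MySQL 5.6 and later let you place individual tables on another filesystem at CREATE TABLE time. It is per-table rather than per-database, so it is only a partial workaround. A hedged sketch, with made-up database and path names:

      -- Assumes MySQL 5.6+ with innodb_file_per_table=ON; names and paths are illustrative.
      -- The .ibd file ends up under /volumes/customer42/shopdb/, with a small pointer
      -- file left in the normal datadir.
      CREATE TABLE shopdb.orders (
          order_id INT NOT NULL PRIMARY KEY,
          created  DATETIME NOT NULL
      ) ENGINE=InnoDB
        DATA DIRECTORY = '/volumes/customer42';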

    Read the article

  • General High-Level Assessment

    - by tcarper
    Guys and Gals, I've been tasked with a doozy of an assignment. The objective is something akin to "laying of hands" on several database servers which work in concert to provide data to various Web, Client-Server and Tablet-Sync'd distributed Client-Server programs. More specifically, I've been asked to come up with a "Maintenance Plan" which includes recommendations for future work to improve these machines' performance/reliability/security/etc. Might there be some good articles on teh interwebs ya'll could point me towards which would give me some good basis to start? Articles describing "These are the top 4 overarching categories and this is how you should proceed when drilling down on each of them" sort-of-thing would be fabulous. The Databases are all SQL 2005, however the compatibility level is 80 and they were originally created with ERwin based on SQL 6.5. The OSs are all Windows Server 2003. Thanks all! Tim
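
    As a hedged starting point for that kind of first pass, a few standard catalog queries will surface the items already mentioned in the question (compatibility level 80, recovery model, last good CHECKDB, last backups). The database name below is a placeholder.

      -- Compatibility level, recovery model and state for every database.
      SELECT name, compatibility_level, recovery_model_desc, state_desc
      FROM sys.databases;

      -- Last successful DBCC CHECKDB (undocumented but widely used; look for dbi_dbccLastKnownGood).
      DBCC DBINFO ('YourDatabase') WITH TABLERESULTS;

      -- Most recent full backup per database.
      SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
      FROM sys.databases AS d
      LEFT JOIN msdb.dbo.backupset AS b
             ON b.database_name = d.name AND b.type = 'D'
      GROUP BY d.name;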

    Read the article

  • Skype, add-on applications, UAC and "Unable to respond"

    - by Greg Low
    Just posting this blog tonight hoping it might save someone else a bunch of time. For call recording on Skype, I use a program called Pamela. When I'd first installed it, it would work fine. Lately, however, it would come up and say: "Another application (Pamela.exe) is attempting to access Skype, but we are unable to respond". You just have to love these sorts of messages that don't give you the slightest clue about what the problem is. While I saw the problem with Pamela, it can happen with...(read more)

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the coming days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little bit slower than I would like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option that makes the creation of execution logs even more straightforward: [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store [new] Added support for a {filename} placeholder in both the Data Flow Profiling and CSV Log Providers The added feature allows generating Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.: DTLoggedExec.exec /FILE:”MyPackage.dtsx” /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • PASS Summit 2012 Women In Technology Luncheon

    - by AllenMWhite
    My final stint at the Summit Blogger's Table(tm) is for the annual WIT luncheon. I do appreciate the honor that PASS conferred on me by inviting me to the "table" for the event; it's been a lot of fun (even if there were some moments that weren't). Newly-elected board member Wendy Pastrick is the MC for this year's luncheon, and the panel consists of Stefanie Higgins, Denise McInerny, Kevin Kline, Jen Stirrup and Kendra Little. I'm pleased to say that I know each one of them except Stefanie Higgins,...(read more)

    Read the article

  • Terrible Performance with SATA Drives on Dell PowerEdge, steps to troubleshoot?

    - by Tom
    I had asked this question earlier and the question went missing, so here it is again. I bought a Dell PowerEdge 2950 to use as an in-house QA server. Disk performance is beyond terrible: 1000-4000 ms response times on the drive with our SQL Server database .mdf, and the SQL Server disk queue is upwards of 300 at times. I'm a software guy; can anyone help me with steps to determine the issue? I don't know what RAID controller it has; how can I determine that? I'm speculating it could be a BIOS issue. Perhaps the server used to have another kind of drive in it and when I added SATA the ??? buffer size is wrong??? Perhaps I chose the wrong options (I chose defaults) when setting up the RAID 1 arrays? I thought RAID 1 was a performance array?
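
    One way to put numbers on the problem from inside SQL Server itself, before digging into the RAID controller, is to look at per-file average read and write latency from the virtual file stats DMV. A sketch (works on SQL Server 2005 and later):

      -- Average I/O stall per read and per write, for every database file on the instance.
      SELECT DB_NAME(vfs.database_id) AS database_name,
             mf.physical_name,
             vfs.num_of_reads,
             vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
             vfs.num_of_writes,
             vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
      FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
      JOIN sys.master_files AS mf
           ON mf.database_id = vfs.database_id
          AND mf.file_id = vfs.file_id
      ORDER BY avg_read_ms DESC;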

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that ironically actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, connection-string, and exploration issues up all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you or your users have multiple options for accessing that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938

    Read the article

  • Rules of Holes #4 -Do You Have the BIG Picture?

    - by ArnieRowland
    Some folks decry the concept of being in a 'Hole'. For them, there is no such thing as 'Technical Debt', no such thing as maintaining weak and wobbly legacy code, no such thing as bad designs, no such thing as under-skilled or poorly performing co-workers, no such thing as 'fighting fires', or no such thing as management that doesn't share the corporate vision. They just go to work and do their job, keep their head down, and do whatever is required. Mostly. Until the day they are swallowed by the...(read more)

    Read the article

  • Thinking in DAX (#powerpivot and #bism)

    - by Marco Russo (SQLBI)
    Last week Alberto published an interesting post about Counting Products in the Current Status with PowerPivot. Starting from a question raised by a reader, Alberto described how to solve a common issue (reporting the “current status” of each item at a given point in time, starting from a transactions table) by using a single DAX formula. I suggest you read his post to understand the technical details. What is inspiring about this example is that we can look at Vertipaq and DAX from several...(read more)
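
    The post is about the DAX formulation, but for comparison, the equivalent relational phrasing of the problem (latest status row per item as of a date) looks roughly like this in T-SQL; the table and column names are hypothetical.

      -- Hypothetical dbo.Transactions(ProductID, StatusDate, Status); @AsOf is the point in time of interest.
      DECLARE @AsOf date = '2011-06-30';

      WITH LastChange AS (
          SELECT ProductID, Status, StatusDate,
                 ROW_NUMBER() OVER (PARTITION BY ProductID
                                    ORDER BY StatusDate DESC) AS rn
          FROM dbo.Transactions
          WHERE StatusDate <= @AsOf
      )
      SELECT ProductID, Status, StatusDate
      FROM LastChange
      WHERE rn = 1;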

    Read the article

  • PowerPivot: Putting two stocks on the same PivotChart

    - by AlbertoFerrari
    In a previous post, I used a stock exchange scenario to speak about how to compute moving averages in a complex scenario. Playing with the same scenario, I felt the need to compare two stocks on the same chart, choosing the stock names with a slicer. As always, a picture is worth a thousand words; the final result I want to achieve is something like this, where I am comparing Microsoft and Apple over the last 10 years. It is clear that I am not going to comment in any way on why traders seem...(read more)

    Read the article

  • Education and Career Resources from Microsoft and the Community

    - by KKline
    Sometimes I'm timely in getting the news out on useful resources. And, other times, I'm a bit slower on the draw. As I told my friends back at New Year's Day, "As an official member of the Procrastinators Club, welcome to 2008!" On the other hand, it's always good to remind folks of great resources that are still available and on the shelf. Why? Well, the Internet hits us with such a deluge of constantly new material, that we often forget about the old(ish) stuff that's still really useful. Darth...(read more)

    Read the article
