Search Results

Search found 100454 results on 4019 pages for 'sql server 2011'.


  • Update KB951847 (3.5 SP1 and Family Update) fails on Windows Server 2003 repeatedly

    - by Ducain
    We have a Win2K3 server that is perpetually stuck on the following update: Microsoft .NET Framework 3.5 Service Pack 1 and .NET Framework 3.5 Family Update for .NET versions 2.0 through 3.5 (KB951847) x86. It fails every time, and I'm at a loss on what to do about it. WHAT I'VE TRIED: Saw an article that said to turn off the update service, rename C:\Windows\SoftwareDistribution\ and then restart the update service. Did this, ran the update, and it failed on this same one. I tried to run the update manually at C:\Windows\SoftwareDistribution\Downloads\Install\ and again it fails. Says, "None of the products that are addressed by this software update are installed on this computer. Click Cancel to exit setup." Because of this issue, none of the previous updates will work either. I will give one large pink-glazed donut with sprinkles to anyone who can help me slay this beast.

    Read the article

  • Microsoft Semantic Search

    - by sqlartist
    This is something I really get excited about - Microsoft Semantic Search. There is an excellent PDC demo and presentation here - http://microsoftpdc.com/Sessions/SVR32 . Initially I didn't think this was SQL-related, but I read that it may be included in future versions of SQL Server. For many years I have written linguistic, semantic, text extraction & clustering code in SQL Server for fun - now finally I can throw that all away and use this tool :) It reminds me of the Microsoft Research...(read more)

    Read the article

  • Getting FC11 to run under VMware Server, converted from physical machine

    - by Kristian
    I have an FC11 installation that I have converted to a VMware disk image to run on my VMware Server. I converted it with qemu-img, as the VMware Converter software apparently only converts Linux hosts to VMware Infrastructure servers. The disk image boots fine (GRUB is loaded and boots the kernel) but it seems like the disk is not found by the kernel, and the boot process stalls. Hotplugging USB devices works (the kernel prints debug information) and I'm able to press keys (Ctrl-Alt-Delete for instance). The VMware guest OS is set to Red Hat Enterprise Linux 5 (32-bit), and I have tried the LSI Logic, LSI Logic SAS, and VMware Accelerated SCSI controllers, to no avail. I'm able to boot an installer disk, get into rescue mode, and mount the filesystem, so my question is: what do I need to do to the guest kernel / initrd image to make it recognize the virtual disk?

    Read the article

  • Windows Storage Server 2003 R2 RDP Disconnection

    - by Antitribu
    I am unable to remote desktop to our Windows Storage Server 2003 R2 machine. A few people around have similar issues, but there are no answers to be found. Symantec Endpoint Protection is installed, but without network protection. Using rdesktop on Linux I receive:

        ERROR: send: Connection reset by peer
        NOT IMPLEMENTED: PDU 12
        ERROR: Connection closed

    where the "PDU XX" number changes on each connection. In Windows, using the latest mstsc, I receive: "The connection was lost due to a network error. Try connecting again. If the problem persists, contact your network Administrator." Any ideas?

    Read the article

  • Server freeze (Disk I/O possibly)

    - by user973917
    I have a Windows Server 2008 machine that is resyncing disks after a power loss. The issue is that the system becomes unresponsive after about 10 minutes. We've checked with Resource Monitor and found that the CPUs aren't maxed, but the disk I/O is well over 250 MB/s. We've attempted copying data from one disk to another (bypassing the disk resync), and this too causes the machine to freeze after about 10 minutes of copying data. I have attempted to let the machine resync the disks for a few days with the machine on in this "frozen" state. By frozen I mean that NOTHING works on the machine; it's completely unresponsive; no mouse movement, etc. I want to know how I would go about definitively checking whether it is disk I/O that is freezing the system. I know that disk I/O can freeze a system, but what can I use to run tests to be sure?

    Read the article

  • VirtualBox / Ubuntu 10.04 Server

    - by SevenCentral
    I created a virtual machine using VirtualBox 3.2 and Ubuntu Server 10.04 x64. Everything works fine. I logged in. I mounted the VBoxGuestAdditions.iso file. However, I can't for the life of me find where in the operating system this was done so I can get the additions installed. I've tried /media, /cdrom, /mnt, etc... I'm obviously new at this. Can someone help figure out how to install the guest additions and how to locate the mounted file? Thanks.

    Read the article

  • The Red Gate Guide to SQL Server Team based Development Free e-book

    - by Mladen Prajdic
    After about 6 months of work, the new book I've coauthored with Grant Fritchey (Blog|Twitter), Phil Factor (Blog|Twitter) and Alex Kuznetsov (Blog|Twitter) is out. They're all smart folks I talk to online, and this book is packed with good ideas backed by years of experience. The book contains a good deal of information about things you need to think of when doing any kind of multi-person database development. Although it's meant for SQL Server, the principles can be applied to any database platform out there. In the book you will find information on: writing readable code, documenting code, source control and change management, deploying code between environments, unit testing, reusing code, and searching and refactoring your code base. I've written chapter 5, about database testing, and chapter 11, about SQL refactoring.

    In the database testing chapter (chapter 5) I cover why you should test your database, why it is a good idea to have a database access interface composed of stored procedures, views and user-defined functions, and what and how to test. I talk about the many testing methods, like black- and white-box testing, unit and integration testing, and error and stress testing, and why and how you should do all of those. Sometimes you have to convince management to go for testing in the development lifecycle, so I give some pointers and tips on how to do that. Testing databases is a bit different from testing object-oriented code in that, to have independent unit tests, you need to roll back your code after each test. The chapter shows you ways to do this and also how to avoid it. At the end I show how to test various database objects and how to test access to them.

    In the SQL refactoring chapter (chapter 11) I cover why to refactor and where to even begin refactoring. I also show you a way to achieve the set-based mindset for solving SQL problems, which is crucial to good set-based SQL programming, and a few commonly seen problems to refactor. These problems include: using functions on columns in the WHERE clause, SELECT * problems, long stored procedures with many input parameters, one subquery per condition in the SELECT statement, the "cursors are good for anything" problem, using too-large data types everywhere, and the "using your data in code for business logic" anti-pattern.

    You can read more about it and download it here: The Red Gate Guide to SQL Server Team-based Development. Hope you like it, and send me feedback if you wish to.
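
    As a rough illustration of the first of those refactorings (functions on columns in the WHERE clause), here is a hypothetical before-and-after; the dbo.Orders table and its columns are invented for the example and are not taken from the book:

        -- Non-sargable: the function is applied to every row's OrderDate,
        -- so an index on OrderDate cannot be used for a seek.
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE YEAR(OrderDate) = 2010;

        -- Refactored: the column is left untouched and the range is expressed
        -- against constants, so an index on OrderDate can be seeked.
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= '20100101'
          AND OrderDate <  '20110101';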

    Read the article

  • Replace the broken file copying UI in Windows 2008 Server 64-bit Explorer

    - by cbp
    Does anyone know a good GUI alternative for file copying on Windows Server 2008 64-bit edition? The built-in GUI has a hopeless interface and is bug-riddled, which really hinders the ability to get things done safely. For example, often when moving a directory with subfolders, the directory and its subfolders will still remain, empty and not deleted. I've been through many of the common file copier and Windows Explorer alternatives, but either they flat-out do not work on a 64-bit/W2k8 machine or they do not actually fully replace the file copier.

    Read the article

  • How Parallelism Works in SQL Server

    - by Paul White
    You might have noticed that January was a quiet blogging month for me.  Part of the reason was that I was working on a series of articles for Simple Talk, examining how parallel query execution really works.  The first part is published today at: http://www.simple-talk.com/sql/learn-sql-server/understanding-and-using-parallelism-in-sql-server/ . This introductory piece is not quite as deeply technical as my SQLblog posts tend to be, but I hope there will be enough interesting material to make...(read more)

    Read the article

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, it then had to be withdrawn. The good news is that it's back, and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Read the article

  • Windows server 2008 R2 IIS7 file permissions

    - by StealthRT
    Hey all, I am trying to figure out why I cannot access an index.php file from within the wwwroot/mollify/backend directory. It keeps coming up with this: Server Error 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied. I've given Full Control permissions on the wwwroot directory to every account I could think of (IUSR, Guest, GUESTS, IIS_IUSRS, Users, Administrators, NETWORK, NETWORK SERVICE, SYSTEM, CREATOR OWNER & Everyone). I also added index.php to the "Default Document" under my website settings in IIS 7 Manager. What else am I missing? Thanks! David

    Read the article

  • Scheduling Jobs in SQL Server Express

    As we all know, SQL Server 2005 Express is a very powerful free edition of SQL Server 2005. However, it does not include the SQL Server Agent service, so scheduling jobs is not possible. If we want to do this we have to install a free or commercial third-party product, which usually isn't allowed due to the security policies of many hosting companies and thus presents a problem. Maybe we want to schedule daily backups, database reindexing, statistics updates, etc. This is why I wanted to have a solution based only on SQL Server 2005 Express and not dependent on the hosting company. And of course there is one, based on our old friend the Service Broker.
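
    The Service Broker approach relies on conversation timers. As a minimal sketch of the idea (not the article's actual code; the queue, service, and dbo.NightlyBackup names are invented here, and EndDialog/error handling is omitted for brevity), an activation procedure can run the job and re-arm the timer each time it fires:

        -- Assumes Service Broker is enabled in the database and that
        -- dbo.NightlyBackup is the stored procedure doing the actual work.
        CREATE QUEUE dbo.SchedulerQueue;
        CREATE SERVICE SchedulerService ON QUEUE dbo.SchedulerQueue ([DEFAULT]);
        GO

        CREATE PROCEDURE dbo.SchedulerActivation
        AS
        BEGIN
            DECLARE @handle uniqueidentifier, @msgtype sysname;

            RECEIVE TOP (1)
                @handle  = conversation_handle,
                @msgtype = message_type_name
            FROM dbo.SchedulerQueue;

            IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
            BEGIN
                EXEC dbo.NightlyBackup;                              -- the scheduled job
                BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 86400;  -- re-arm for 24 hours
            END
        END;
        GO

        ALTER QUEUE dbo.SchedulerQueue
            WITH ACTIVATION (
                STATUS = ON,
                PROCEDURE_NAME = dbo.SchedulerActivation,
                MAX_QUEUE_READERS = 1,
                EXECUTE AS OWNER);
        GO

        -- Kick it off: open a dialog from the service to itself and arm the first timer.
        DECLARE @handle uniqueidentifier;
        BEGIN DIALOG CONVERSATION @handle
            FROM SERVICE SchedulerService
            TO SERVICE 'SchedulerService'
            WITH ENCRYPTION = OFF;
        BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 86400;

    When each timer message arrives, the activation procedure runs the job and arms the next timer, so the "job" keeps firing roughly every 24 hours without SQL Server Agent.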

    Read the article

  • The results are in: I'm an okay speaker.

    - by AaronBertrand
    Last weekend I spoke at SQL Saturday #60 in Cleveland, Ohio. I had a great time catching up with some existing friends and colleagues, and met a bunch of new people too. I presented two sessions: What's New in Denali, and T-SQL: Bad Habits to Kick. Yesterday the organizers passed along the scanned-in speaker evaluations (this was the first SQL Saturday event where I found folks to be quite motivated to fill out the forms, since it was how they drew the door prizes). And being the ADD person I am,...(read more)

    Read the article

  • Dreamweaver Files uploaded to Win 2008 server cause login prompt

    - by Lil
    I have a customer who uses a 4 year old version of Dreamweaver to edit her webpages. My hosting reseller account is with a company that uses Windows Server 2008. Every time my customer edits a page and uploads it, I have to set the permissions for that file to be readable, manually from the site's control panel. The customer is furious with me because her files cause the login prompt. I am able to upload files myself that remain readable to the site with both Filezilla and with Frontpage. I am assuming that her Dreamweaver settings are the cause of the problem but I don't have that program myself and don't know what to advise her. Any suggestions?

    Read the article

  • One-way forest trust between geographically distributed forests using Server 2008 R2

    - by bwerks
    Hi all, I'm planning out a joinder between two domains, as would take place with contracting companies. Forests A and B exist in distant sites, and there is to be a one-way forest trust so that domain users in Forest A can be authenticated on machines in Forest B. In order to facilitate this, each forest's domain controllers must be able to contact each other to set up and confirm the trust, but my question is what underlying networking magic must take place beneath it. So far the prevailing approach has been to maintain a VPN connection between the two sites, but the TechNet documentation seems to indicate that DNS forwarding may be the way to go. Is this the case? Furthermore, if DNS will suffice, does that mean that DNS must be running on boundary servers in each domain so that they can be reached from across the internet? How must they be configured? Thanks!

    Read the article

  • Windows Server 2003 Standby/Sleep Results in Cold Boot when Resuming

    - by Simon Chadwick
    I'm running Windows Server 2003 sp2 on a Dell Inspiron e1705. There has never been a problem going to standby/sleep mode (by closing the laptop lid, or from the Start menu), and then resuming later. Today I did a Windows Update to install the latest patches and fixes. I don't know if there is a correlation, but now when I resume from standby/sleep mode, the machine does a cold boot. The AC adapter is connected. The battery is 2 months old, and all its LEDs light up when pressing its test button. The Power Management shows it at "99% (charging)" all the time. There is nothing suspicious in any of the event logs. Any ideas?

    Read the article

  • Server room kit?

    - by Bill Weiss
    I feel like this is a question I've seen on here before, but some searching didn't do me any good. This looks similar, but I'm looking for stuff I leave there, not what's in my go-bag. What would you say is indispensable equipment in your server room? I've inherited one that's a bit light on stuff (except for servers, those are in there). We're in the single digits of racks, if that matters. I'm thinking of things like: Cable labeler Ethernet tester (copper at least, fibre if you need) ... ? Community wiki, because, really. [Edit] I suppose it's important to say that it's a colo facility, kind of far from the office. No food, water, etc. :(

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, add up to the whole whitepaper.

    Abstract
    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction
    Fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining as well as in specific fraud detection techniques: supervised or directed, and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient.

    Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult if not impossible to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds among the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    The SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010, is already included in Excel 2013, and brings OLAP functionality directly into Excel, this makes it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
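
    As a small worked illustration of the lift measurement described above (the dbo.Transactions table with its IsFraud flag and FraudScore column is hypothetical, not taken from the whitepaper), the fraud rate among the highest-scored transactions can be compared with the overall fraud rate directly in T-SQL:

        -- Lift of the top 1% highest-scored transactions over random sampling:
        -- fraud rate inside the top percentile divided by the overall fraud rate.
        WITH Scored AS (
            SELECT IsFraud,
                   NTILE(100) OVER (ORDER BY FraudScore DESC) AS Percentile
            FROM dbo.Transactions
        )
        SELECT AVG(CASE WHEN Percentile = 1 THEN CAST(IsFraud AS float) END)
             / AVG(CAST(IsFraud AS float)) AS LiftTop1Percent
        FROM Scored;

    A result of 40, for example, would mean the model-selected top percent contains forty times as many frauds as a random sample of the same size.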

    Read the article

  • Mysql-server-5.5 broken after update

    - by WalrusTusks
    Using Ubuntu 12.04 desktop, I had LAMP installed on my computer and was using it as a server. However, after doing the upgrades one day, apt-get throws an error that mysql-server can't be configured, as it depends on another package:

        jay@rumbles:~$ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         mysql-server-5.5 : Depends: mysql-server-core-5.5 (= 5.5.22-0ubuntu1) but 5.5.24-0ubuntu0.12.04.1 is installed
        E: Unmet dependencies. Try using -f

        jay@rumbles:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          mysql-server-5.5
        Suggested packages:
          tinyca mailx
        The following packages will be upgraded:
          mysql-server-5.5
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/8,821 kB of archives.
        After this operation, 2,048 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        dpkg: dependency problems prevent configuration of mysql-server-5.5:
         mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.22-0ubuntu1); however:
          Version of mysql-server-core-5.5 on system is 5.5.24-0ubuntu0.12.04.1.
        dpkg: error processing mysql-server-5.5 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         mysql-server-5.5
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How can I fix this?

    Read the article

  • Server 2012 GPO: PowerShell Script on Computer Startup not running

    - by Alex
    I've got a couple of Server 2012 instances on Amazon EC2 and I'm in the process of setting up the GPOs. All of the settings of the GPOs are being applied fine, except none of the PowerShell scripts specified on computer startup are actually being executed. The scripts are sitting on a UNC share which has Authenticated Users applied to it with full permissions. I'm assuming it probably has something to do with the Execution Policy, but I'm not sure how to automatically bypass it. I could just go into each instance and bypass the Execution Policy, but that's obviously not a good idea, plus I'm eventually going to connect Windows 7 computers that will be running the same scripts. How can I get the scripts to actually run? Google searches haven't yielded a whole lot...

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?
    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document that columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries
    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details
    Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures              Columnstore      Δ
        SSRS via SSAS   10 - 12 seconds                      1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)    1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.

    The Wins
    Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing the time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
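
    For reference, creating such an index is a one-statement change; the example below is only illustrative (the fact table and column names are invented, not DevCon's actual schema):

        -- SQL Server 2012 nonclustered columnstore index over the columns most
        -- commonly hit by the ad hoc reporting queries.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurityEvent
        ON dbo.FactSecurityEvent
            (EventDateKey, CustomerKey, SiteKey, EventTypeKey, DurationSeconds);

        -- Note: in SQL Server 2012 the table becomes read-only while this index
        -- exists, which fits a nightly partition-switch refresh like the one
        -- described above: load a staging table, build the matching columnstore
        -- index there, then switch the partition in.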

    Read the article

  • Trying to determine the correct number of XFS allocation groups for postgresql server on Linux

    - by HBlend
    I am running a Postgres 8.4.5 server on the Linux 2.6.33.7 kernel on an 8-disk RAID array with an LSI controller. Most of the tables are around 1 GB or less. I know that XFS uses allocation groups (AGs) to achieve I/O parallelism. My first question is, does this mean that if two tables are in the same AG, all I/O requests are queued to both of them if either is being read from/written to? If so, I assume I would want to spread my tables across as many allocation groups as possible, correct? Wouldn't this ensure that multiple users querying different tables would get the best performance?

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn't want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite.  Today's post could fall into the category of, "Hey, as long as you have that fancy tool…"  Unfortunately, it also falls into the category of an overloaded worker taking someone else's word for the truth, not verifying it with independent fact-checking, and then making decisions based on that.  Here's my story…

    I'm not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it).  I came to this company five years ago as a lead application developer with extensive experience in database design and development.  I worked my way into management, and along the way, took over the DBA responsibilities.  Fortunately, our systems run pretty smoothly most of the time, but I'm always looking for ways to make them better and to fit into my understanding of best practices.  When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances.  Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter.  Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks.  He stated that they did, and I thought that all was well.

    Then one day, I'm poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section.  The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic.  But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS.  I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation.  Then I thought, "there must be something wrong with the demo version of the software!  This can't be right!"  But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth!

    I was stunned!  I quickly went through the grieving process…denial…anger…reconciliation.  Here was something that I thought was such a basic truth that was turned upside down.  OK, granted, this wasn't disastrous.  Our databases didn't suddenly grind to a halt.  I didn't get calls late at night inquiring about the sudden downturn in performance.  But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify!

    Yes, before someone else points it out, I know that there are "free" disk management tools built into Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that.  But the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but have actually never verified?

    Read the article

  • upgrade from windows server 2008 r2 std to enterprise

    - by Ravi
    We recently had to upgrade our servers from Windows Server 2008 R2 Standard to Enterprise. The upgrade method was DISM. After the upgrade, a lot of weird things started to happen. Though Windows says it's been properly activated, we have these problems:
    1) The RDP settings are gone (disappeared from the Remote settings tab of Properties of My Computer).
    2) The option to join a domain is grayed out.
    3) The server has 2 physical processors with 12 cores each; now Task Manager/Windows sees only 1 processor (or only 12 cores).
    4) The amount of physical RAM on this server is 32 GB, but Windows reports only 4 GB as usable.
    Has anyone encountered this before? Thanks, Ravi

    Read the article

  • Error with APE Server Installation

    - by sadmicrowave
    I was trying to install APE-Server from the .deb file at the ape-server homepage (www.ape-project.org) and I ran into an error, so I wanted to try removing the installation and reinstalling. I did a sudo apt-get remove ape-server, which ran successfully but left ape-server folders in my /etc/ and /etc/init.d locations. Being an idiot newcomer to Linux, I decided to manually delete those folders. Now when I reinstall the ape-server those folders don't get recreated, and therefore I cannot send the /etc/init.d/ape-server [option] command because the folder is not found. When I try to sudo apt-get purge (or remove) ape-server I get the following:

        sudo apt-get purge ape-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          ape-server*
        0 upgraded, 0 newly installed, 1 to remove and 92 not upgraded.
        1 not fully installed or removed.
        After this operation, 1,753kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 43924 files and directories currently installed.)
        Removing ape-server ...
        invoke-rc.d: unknown initscript, /etc/init.d/ape-server not found.
        dpkg: error processing ape-server (--purge):
         subprocess installed pre-removal script returned error exit status 100
        update-rc.d: /etc/init.d/ape-server: file does not exist
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         ape-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    My question is: how do I remove all of the ape-server installation packages that were installed so I can reinstall from scratch?

    Read the article
