Search Results

Search found 14 results on 1 page for 'miked'.


  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table to help us build aggregations; we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the Analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

        CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. But once it stops logging, it doesn't retry later - not an hour, a day, a week, or even a month later - so long as the service doesn't restart.

    That has burned us a couple of times, when we made changes to the service account used for SSAS and that account didn't have access to the database we log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give it access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions than necessary, but I didn't want to hunt down the specific permissions at the time.

    Once the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered simply restarting SSAS to get it logging again. However, this was a production server in the middle of business hours, with active users connected to the instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked for a property I could change that wouldn't affect the actual connection. Aha! The Application Name property would do nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. The query logging started up right away. If I need to get it running again in the future, I can just make another small change to the Application Name property, save it, and even change it back again if I want to.

    The other nice side effect of setting the Application Name property is that I can now see (and filter for, or filter out) the SQL activity related to the query logging process in Profiler.

    To sum up:

    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart without restarting the service.
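
    For reference, the Application Name attribute simply rides along in the connection string without changing how the connection is made. A QueryLogConnectionString might look roughly like the sketch below - the provider, server and database names are illustrative placeholders, not values from our environment:

        Provider=SQLNCLI10.1;Data Source=MYSERVER;Initial Catalog=OlapLogDB;Integrated Security=SSPI;Application Name=SSAS Query Logging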

    Read the article

  • Missing ODBCConfig for SQLite and LibreOffice Base

    - by MikeD
    I have used OpenOffice Base as a front end for SQLite databases; in 10.04 they were linked via ODBC. I am updating to 12.04, so I loaded LibreOffice Base, which looks just like OOBase. I have 12.04 on one drive and 10.04 on another. I loaded Sqliteman, sqlite3, unixodbc-bin, unixodbc, libsqliteodbc and sqlitebrowser, and copied my databases directory over. But in a terminal, ODBCConfig is not on the 12.04 system. So I copied odbc.ini from the home directory on 10.04 to the 12.04 home directory, and now LibreOffice Base can access my database and all is fine. Does anyone know what the problem with ODBCConfig is? Is it another victim of the change in Qt? Of course I could edit odbc.ini with jedit, now that I can see the format. In a terminal I tried sudo find / -name ODBCConfig, but it's not there.
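
    In case anyone else ends up hand-editing instead of using ODBCConfig: a minimal ~/.odbc.ini entry for the SQLite ODBC driver looks roughly like the sketch below. The DSN name and database path are placeholders, and the Driver label must match the section name libsqliteodbc registered in /etc/odbcinst.ini (typically SQLite3):

        [mydb]
        Description = SQLite database for LibreOffice Base
        Driver      = SQLite3
        Database    = /home/user/databases/mydb.db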

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used the "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.

    Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG listener in Azure is not currently supported, so I wanted to figure out another mechanism to support a "pseudo-listener" of some sort.

    First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough.

    (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)

    To accomplish this, I created identical SQL Agent jobs on Server1 and Server2, with two steps.

    Step 1: Determine if this server is hosting the primary replica. This is a T-SQL step using this script:

        declare @agName sysname = 'AGTest'
        set nocount on
        declare @primaryReplica sysname

        select @primaryReplica = agState.primary_replica
        from sys.dm_hadr_availability_group_states agState
            join sys.availability_groups ag on agState.group_id = ag.group_id
        where ag.name = @agName

        if not exists(
            select *
            from sys.dm_hadr_availability_group_states agState
                join sys.availability_groups ag on agState.group_id = ag.group_id
            where @@servername = agState.primary_replica
                and ag.name = @agName)
        begin
            raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s', 17, 1, @agName, @@servername, @primaryReplica)
        end

    This script determines whether the primary replica value of the AG is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @agName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that too will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

    Step 2: Update the CNAME entry in DNS with this server's name. I used a PowerShell script to accomplish this:

        $cname = "SPSQL.contoso.com"
        $query = "Select * from MicrosoftDNS_CNAMEType"
        $dns1 = "dc01.contoso.com"
        $dns2 = "dc02.contoso.com"

        if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns1
        }
        elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns2
        }
        else
        {
            $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
            Throw $msg
        }

        $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
        $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
        $currentServer = $record.RecordData

        if ($currentServer -eq $thisServer)
        {
            $cname + " CNAME is up to date: " + $currentServer
        }
        else
        {
            $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
            $record.RecordData = $thisServer
            $record.Put()
        }

    This script does a few things:

    - finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    - makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    - gets the FQDN of this server (GetHostEntry)
    - checks whether the CNAME record is correct and updates it if necessary

    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine and ends with a period, so the comparison of the CNAME record has to take the trailing period into account.

    When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL credential to store the credentials of a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Server Management Studio, right-click on the Credentials node (under the server's Security node) and choose New Credential... Then, under SQL Agent --> Proxies, right-click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop-down.

    I created this two-step job on both nodes of the availability group; if you have more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute.

    When the server hosting the primary replica runs the job, both steps complete successfully; on the secondary server, step 1 fails and the job quits reporting success without touching DNS. When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten the TTL to reduce the time it takes for client connections to use the new alias.

    Using a DNS CNAME and a SQL Agent job on all servers hosting AG replicas, I was able to create a pseudo-listener that automatically changes the name of the server hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).
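
    As a side note, the credential and proxy can also be scripted instead of created through the SSMS dialogs. A rough T-SQL sketch - the credential name, account and password below are placeholders:

        CREATE CREDENTIAL DnsUpdateCredential
            WITH IDENTITY = 'CONTOSO\dnsadmin', SECRET = '<password>';

        EXEC msdb.dbo.sp_add_proxy
            @proxy_name = 'PowerShellProxy',
            @credential_name = 'DnsUpdateCredential';

        EXEC msdb.dbo.sp_grant_proxy_to_subsystem
            @proxy_name = 'PowerShellProxy',
            @subsystem_name = 'PowerShell';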

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly every time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into the warehouse. The data I was importing was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the inserts, updates and deletes in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse - the MERGE statement - that took the rows from the working table and updated the real fact table:

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date,
            Operation varchar(5))

        CREATE TABLE fact.Policy (
            PolicyKey int identity primary key,
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey,
                    Premium = src.Premium,
                    Deductible = src.Deductible,
                    EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.MergePolicy
            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have a primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc.

    For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted repeatedly. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking.

    This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. What seemed strange, however, was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, with table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. (I was beginning to suspect that my problem was because the work table was being stored as a heap.)

    Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0.

    I suppose I should research more on the allocation and deallocation of pages in tables stored as heaps, but I haven't, and my original assumption - that a table stored as a heap with no rows would only need to read one page to answer any query - was definitely proven wrong. It's likely that some sort of physical defragmentation of the table would have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding the clustered index because it was taking too long; instead, I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.)

    I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing the truncate operation does a rather quick release of the pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
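
    For reference, the eventual fix was nothing more exotic than the sketch below (the index name and key column are illustrative; any reasonable clustering key would have worked):

        TRUNCATE TABLE Integration.MergePolicy;

        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId
            ON Integration.MergePolicy (PolicyId);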

    Read the article

  • Imaginet is hiring

    - by MikeD
    We have an immediate need for new staff members!

    - Project Manager
    - Systems Analysts
    - Test Manager
    - Web Developer
    - Database Admin

    Please contact me if you are interested.

    Read the article

  • Acad 14 SOLIDS disappeared, DWG file doesn't open correctly

    - by MikeD
    I made a drawing in AutoCAD 14 (Win XP) consisting mostly of SOLIDs. I saved and re-opened it multiple times without any problem. Before I last saved it, I viewed the drawing using the SHADE functions. After re-opening, all my SOLIDs have disappeared. I spent numerous hours searching for a solution: I tried SATFIX.ARX, AUDIT and RECOVER without success (an ACIS error, which should be gone after applying SATFIX), and changed my computer locale from German (decimal = comma) to English (decimal = dot), but my screen remains empty (and yes, I tried to recover from a .BAK too). I also exported the non-displaying drawing to DXF and can confirm that all my objects are in there, but re-opening the DXF results in a huge list of ACIS errors again. I am desperate - please can someone help? Thanks! Mike

    Read the article

  • How to disable a jqGrid select list (dropdown) on the edit form

    - by MikeD
    This is strange, and any alternative method to accomplish what I want is welcome. My app uses jqGrid 3.5.3, and I need to disable a select list on my edit form. When I do so using the code below, it breaks the edit form - meaning I cannot cancel or submit it. This code is in the edit options array of the navGrid method. The dropdown is the 'serv_descr' field; the others are text boxes and don't pose a problem. The form does come up with the field disabled - it's just broken.

        beforeShowForm: function(eparams) {
            document.getElementById('equip_id').disabled = true;
            document.getElementById('service_dt').disabled = true;
            document.getElementById('serv_descr').disabled = true;
            document.getElementById('calc_next_svc').checked = true;
        }

    Thanks.

    Read the article

  • Stream Reuse in C#

    - by MikeD
    I've been playing around with what I thought was a simple idea. I want to be able to read in a file from somewhere (website, filesystem, FTP), perform some operations on it (compress, encrypt, etc.) and then save it somewhere (a filesystem, FTP, or whatever). It's a basic pipeline design. What I would like to do is read the file onto a MemoryStream, perform the operations on the data in the MemoryStream, and then save that data somewhere. I was thinking I could use the same Stream to do this, but I run into a couple of problems:

    - Every time I use a StreamWriter or StreamReader I need to close it, and that closes the stream so that I cannot use it anymore. It seems like there must be some way around that.
    - Some of these files may be big, so I may run out of memory if I try to read the whole thing in at once.

    I was hoping to be able to spin up each of the steps as separate threads and have the compression step begin as soon as there is data on the stream; then, as soon as the compression has some compressed data available on the stream, I could start saving it (for example). Is anything like this easily possible with C# streams? Anyone have thoughts as to how best to accomplish this? Thanks, Mike
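
    A minimal sketch of the chaining idea, assuming plain files at both ends (the file names are placeholders): rather than staging everything in one reusable MemoryStream, each wrapper stream transforms bytes as they flow through, so large files never have to fit in memory and no explicit threading is needed.

        using System.IO;
        using System.IO.Compression;

        class StreamPipeline
        {
            static void Main()
            {
                // Chain the streams: bytes read from the source are compressed
                // on their way into the destination, with no full in-memory copy.
                using (Stream source = File.OpenRead("input.dat"))
                using (Stream destination = File.Create("output.dat.gz"))
                using (var gzip = new GZipStream(destination, CompressionMode.Compress))
                {
                    source.CopyTo(gzip); // .NET 4+; on older versions, loop over a byte[] buffer
                }
            }
        }

    On the reader/writer problem specifically: .NET 4.5 added StreamWriter and StreamReader constructor overloads with a leaveOpen flag, so disposing the writer no longer closes the underlying stream; on earlier versions the usual workaround is to Flush() the writer and simply not dispose it until you are done with the stream.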

    Read the article

  • ASP.NET MVC - Bizarre problem - suddenly lost all LINQ to SQL data context objects

    - by MikeD
    I was making an edit to a long-existing project. Specifically, I added some fields to a table and had to delete the table from the LINQ to SQL designer and re-add it. I also had to do the same for a view. I made some other code changes and went to build. Now my project won't build because it can't resolve any of the data context objects (all tables and views) in my code. I don't know what I did or how this happened. I have many tables and views in the project's L2S data context, so I don't want to do it all over. Any suggestions on how to resolve this problem are greatly appreciated. Desperate! The error messages I am getting are the familiar: The type or namespace name 'equipment' could not be found (are you missing a using directive or an assembly reference?)

    Read the article

  • Create a bookmark in the 1st column of an MS Word table

    - by MikeD
    Hello all, does anyone have VBA code to create a bookmark in the first column of an MS Word table? Let's say I have a table looking like this:

        .-----.-----------.
        . ref . Title     .
        .-----.-----------.
        . 1   . Title 1   .
        .-----.-----------.
        . 2   . Title 2   .
        .-----.-----------.
        . foo . Title 3   .
        .-----.-----------.
        . bar . Title 4   .
        .-----.-----------.

    and I want a VBA code fragment that creates a bookmark named "T1_1" on the string "1" in row 2 / column 1, and bookmarks named "T1_2", "T1_foo" and "T1_bar" on the strings in the other cells of column 1. I don't mind hardcoding the prefix "T1" (and substituting it for other tables each time), I don't mind selecting the table before running the macro, I don't mind giving those cells a special format, and I don't mind getting a superfluous bookmark "T1_ref" from the first row - so the code doesn't need to distinguish between the title row and the data rows. Thanks a lot in advance.
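
    A rough sketch of one way to do this in Word VBA, assuming the cursor sits inside the target table, no vertically merged cells, and first-column values that contain only bookmark-legal characters (letters, digits, underscores):

        Sub BookmarkFirstColumn()
            Const PREFIX As String = "T1_"   ' hardcoded table prefix, per the question
            Dim tbl As Table
            Dim rng As Range
            Dim r As Long

            Set tbl = Selection.Tables(1)    ' the table the cursor is in
            For r = 1 To tbl.Rows.Count      ' row 1 just yields the superfluous "T1_ref"
                Set rng = tbl.Cell(r, 1).Range
                rng.End = rng.End - 1        ' exclude the end-of-cell marker
                ActiveDocument.Bookmarks.Add Name:=PREFIX & Trim(rng.Text), Range:=rng
            Next r
        End Sub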

    Read the article

  • cast operator to base class within a thin wrapper derived class

    - by miked
    I have a derived class that's a very thin wrapper around a base class. Basically, I have a class that can be compared in two different ways depending on how you interpret it, so I created a new class that derives from the base class and only has new constructors (that just delegate to the base class) and a new operator==. What I'd like to do is overload operator Base&() in the Derived class for cases where I need to interpret it as the Base. For example:

        class Base {
            Base(stuff);
            Base(const Base& that);
            bool operator==(Base& rhs); // typical equality test
        };

        class Derived : public Base {
            Derived(stuff) : Base(stuff) {};
            Derived(const Base& that) : Base(that) {};
            Derived(const Derived& that) : Base(that) {};
            bool operator==(Derived& rhs); // special-case equality test
            operator Base&() {
                return (Base&)*this; // Is this OK? It seems wrong to me.
            }
        };

    If you want a simple example of what I'm trying to do, pretend I had a String class where String==String is the typical character-by-character comparison, and I created a new class CaseInsensitiveString that does a case-insensitive compare on CaseInsensitiveString==CaseInsensitiveString but in all other cases just behaves like a String. It doesn't even have any new data members, just an overloaded operator==. (Please, don't tell me to use std::string, this is just an example!) Am I going about this right? Something seems fishy, but I can't put my finger on it.

    Read the article

  • How do I combine these similar LINQ queries into one?

    - by MikeD
    This is probably a basic LINQ question. I need to select one object, and if it is null, select another. I'm using LINQ to Objects in the following way, which I know can be done quicker, better, cleaner...

        public Attrib DetermineAttribution(Data data)
        {
            var one = from c in data.actions
                      where c.actionType == Action.ActionTypeOne
                      select new Attrib { id = c.id, name = c.name };
            if (one.Count() > 0)
                return one.First();

            var two = from c in data.actions
                      where c.actionType == Action.ActionTypeTwo
                      select new Attrib { id = c.id, name = c.name };
            if (two.Count() > 0)
                return two.First();

            return null; // fall-through path, omitted in my original snippet
        }

    The two LINQ operations differ only in the where clause, and I know there is a way to combine them. Any thoughts would be appreciated.
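
    One hedged sketch of a combined version, assuming LINQ to Objects and the same Data/Attrib/Action types as in the question: filter to both action types, order so that ActionTypeOne wins when both are present, and take the first match or null.

        // Requires: using System; using System.Linq;
        public Attrib DetermineAttribution(Data data)
        {
            // Preference order: ActionTypeOne first, then ActionTypeTwo.
            var preferred = new[] { Action.ActionTypeOne, Action.ActionTypeTwo };

            return (from c in data.actions
                    where preferred.Contains(c.actionType)
                    orderby Array.IndexOf(preferred, c.actionType)
                    select new Attrib { id = c.id, name = c.name })
                   .FirstOrDefault(); // null when neither action type is present
        }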

    Read the article

  • Visual Studio 2005 Ignores Preprocessor directives during compile

    - by miked
    We just got a new developer and I'm trying to set him up with Visual Studio 2005 (the version we all use at this office), and we're running into a weird problem I've never seen before. I have some code that works perfectly on my system, and he can't seem to get it to compile. We've tracked the issue down to his copy of Visual Studio ignoring the preprocessor directives. For example, in the project properties under C/C++ | Preprocessor | Preprocessor Definitions, I add DEFINE_ME, which should translate to /D"DEFINE_ME" for the compiler. It does in my development environment, but it doesn't in his. I verified that when he checks the code out of the source repository, he has the same version of the code I do. And if I look in his project properties, all of the directives are there. For some reason they're just not getting passed down to the compiler. Any ideas?
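
    For context, a minimal sketch of the kind of guard that exposes the problem (the message and variable are illustrative): with /D"DEFINE_ME" on the compiler command line the file compiles, and without it the #error fires - which is exactly the difference between our two machines.

        // foo.cpp - compiles only when DEFINE_ME is passed to the compiler
        #ifdef DEFINE_ME
        const char* buildFlavor = "DEFINE_ME is defined";
        #else
        #error DEFINE_ME was not defined - check the preprocessor settings
        #endif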

    Read the article

  • Manipulating the case of the build macros in Visual Studio: $(TargetDir)

    - by miked
    I've come across a weird problem today in Visual Studio 2005. If I create a new configuration "NewConfiguration" for one of my projects, the output directory referred to by $(TargetDir) is "/path/to/build/area/newconfiguration". Note the loss of capital letters - I'd expect it to be "/path/to/build/area/NewConfiguration". In another project that I created yesterday, the capital letters are there. Normally this wouldn't be a problem, but it's part of a fairly complicated build system where some of it runs on Unix and we need to worry about case-sensitive filenames. Does anyone know where the source strings for the Visual Studio macros like $(TargetDir) are stored, so that I can get the case to be consistent and match what we need for our build system?

    Read the article
