Search Results

Search found 85825 results on 3433 pages for 'sql data services'.


  • Problem running a report from another machine

    - by shabi
    I am using SQL Server 2008 Reporting Services, configured for remote mode. Everything works and the reports run on my machine. I am not using the Report Viewer control; I switch to the browser instead. The problem is that when I access the report from any other system in a browser, by providing the required URL, I get the following permission error: Server Error in /ReportServer Application. Access is denied. Description: An error occurred while accessing the resources required to serve this request. You might not have permission to view the requested resources. Error message: 401.3: You do not have permission to view this directory or page using the credentials you supplied. I have gone through all the steps in the article "http://msdn.microsoft.com/en-us/library/ms365170.aspx" and set the remote permissions, but after all the changes I still get the same error. Can someone please tell me, or provide a list of steps, how to set the permissions so that the report can be run from another machine? A quick and detailed response would be appreciated.

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from the pure fact that the people asking didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take the liberty of tagging this post with the same acronym.

    I often come across questions of the type – ‘Hey, I am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid.

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    The answer is usually(1), ‘OK, let me check it out… Ah yes – you have to put the name of the DEFAULT constraint before the type of constraint:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    Why do many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is… intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don’t know of a better way of defining the syntax of a statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notation, sometimes called BNF for short. This is a notation that was invented around 50 years ago – and some say even earlier, around 400 BC, would you believe? Originally it was used to define the syntax of the, rather ancient now, ALGOL programming language (in the 1950s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this notation are pretty simple. Here are a few bullet points:

    - italic_text is a placeholder for your identifier
    - <italic_text_in_angle_brackets> is a definition which is described further
    - [everything in square brackets] is optional
    - {everything in curly brackets} is obligatory
    - everything | separated | by | the pipe operator is a set of alternatives
    - ::= “assigns” a definition to an identifier

    Yes, it looks like these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions – have you ever read it? Let’s have a look at a fragment of the CREATE TABLE statement:

        1 CREATE TABLE
        2 [ database_name . [ schema_name ] . | schema_name . ] table_name
        3 ( { <column_definition> | <computed_column_definition>
        4 | <column_set_definition> }
        5 [ <table_constraint> ] [ ,...n ] )
        6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup
        7 | "default" } ]
        8 [ { TEXTIMAGE_ON { filegroup | "default" } ]
        9 [ FILESTREAM_ON { partition_scheme_name | filegroup
        10 | "default" } ]
        11 [ WITH ( <table_option> [ ,...n ] ) ]
        12 [ ; ]

    Let’s look at line 2 of the above snippet. This line uses rules 3 and 5 from the list, so you know that you can create a table whose name is specified in one of the following ways:

    - just the table name – the table will be created in the user’s default schema
    - schema name and table name – the table will be created in the specified schema
    - database name, schema name and table name – the table will be created in the specified database, in the specified schema
    - database name, .., table name – the table will be created in the specified database, in the default schema of the user

    Note that this single line of the notation describes each of the naming schemes in a deterministic way. The ‘optionality’ of the schema_name element is nested within the database_name.. section. You can use either database_name with an optional schema name, or just the schema name – this is specified by the pipe character ‘|’. The error that the user gets on execution of the first script fragment in this post is as follows:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'DEFAULT'.

    OK, let’s have a look at how to find out the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the code. And indeed, the very next fragment of BNF contains the syntax of the column definition:

        1 <column_definition> ::=
        2 column_name <data_type>
        3 [ FILESTREAM ]
        4 [ COLLATE collation_name ]
        5 [ NULL | NOT NULL ]
        6 [
        7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
        8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
        9 ]
        10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
        11 [ SPARSE ]

    Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you make the effort of coming up with a meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is this:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online; you can find really interesting things hidden there.

    Technorati Tags: SQL Server, t-sql, BNF, syntax

    (1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because, again, the book is sorted by LastName; it’s not sorted by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone whose name starts with F, I can do that using a query a bit like:

        SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F';

    Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write:

        SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%';

    then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan – in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: you’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is ä, which appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start with whichever letter came straight after the number 9.
    I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar, not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable.

        $collation = 'Finnish_Swedish_CI_AI';
        $firstletter = '9';
        $alphabet = '';

    Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks.

        Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);"

    Now I get into the looping.

        $c = $firstletter;
        $stillgoing = $true;
        while ($stillgoing) {

    I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it:

        $query = "select col from dbo.collation_test where col like '$($c)%';";
        [xml] $pl = get-sqlplan $query "." "tempdb";

    At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done.

        $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null);

    Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.

        if ($stillgoing)
        {
            $c = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");
            $alphabet += $c;
        }

    Finally, finishing the loop, dropping the table, and showing my alphabet!

        }
        Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;";
        $alphabet;

    When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and now you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...

    Read the article

  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction

    In this article, I’ll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows.

    Analysis

    I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks, and I wanted to create a tag cloud. Tags that are the most popular should have more chance of getting picked, and should be displayed larger than less popular ones.

    Implementation

    In this code snippet, there is everything you need.

        ' Min size, in pixels, for the tag
        Private Const MIN_FONT_SIZE As Integer = 9
        ' Max size, in pixels, for the tag
        Private Const MAX_FONT_SIZE As Integer = 14

        ' Basic function that retrieves Tags from a database
        Public Shared Function GetTags() As MediasTagsDataTable
            ' Simple call to the TableAdapter, to get the Tags ordered by number of clicks
            Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide
            ' If the query returned no result, return an empty DataTable
            If dt Is Nothing OrElse dt.Rows.Count < 1 Then
                Return New MediasTagsDataTable
            End If
            ' Set the font-size of the group of data
            ' We are dividing our results into subsets, according to their number of clicks
            ' Example: 10 results -> [0,2] will get font size 9, [3,5] will get font size 10, [6,8] will get 11, ...
            ' This is the number of elements in one group
            Dim groupLenth As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer)
            ' Counter of elements in the same group
            Dim counter As Integer = 0
            ' Counter of groups
            Dim groupCounter As Integer = 0
            ' Loop through the list
            For Each row As MediasTagsRow In dt
                ' Set the font-size in a custom column
                row.c_FontSize = MIN_FONT_SIZE + groupCounter
                ' Increment the counter
                counter += 1
                ' If the counter has reached the group length
                If groupLenth <= counter Then
                    ' Start a new group
                    counter = 0
                    groupCounter += 1
                End If
            Next
            ' Return the new DataTable with font-size
            Return dt
        End Function

        ' Function that generates the random subset
        Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable
            ' Get the data
            Dim dt As MediasTagsDataTable = GetTags()
            ' Create a new DataTable that will contain the random set
            Dim rep As MediasTagsDataTable = New MediasTagsDataTable
            ' Count the number of rows in the new DataTable
            Dim count As Integer = 0
            ' Random number generator
            Dim rand As New Random()
            While count < KeyCount
                Randomize()
                ' Pick a random row (note: Random.Next's upper bound is exclusive,
                ' so use dt.Rows.Count rather than dt.Rows.Count - 1, otherwise the last row can never be picked)
                Dim r As Integer = rand.Next(0, dt.Rows.Count)
                Dim tmpRow As MediasTagsRow = dt(r)
                ' Import it into the new DataTable
                rep.ImportRow(tmpRow)
                ' Remove it from the old one, to be sure not to pick it again
                dt.Rows.RemoveAt(r)
                ' Increment the counter
                count += 1
            End While
            ' Return the new subset
            Return rep
        End Function

    Pros

    This method is good because it doesn’t require much work to get it working fast. It is a good approach when you are working with small tables, let’s say fewer than 100 records.

    Cons

    If you have more than 100 records, an out-of-memory exception may occur since we are copying and duplicating rows. I would consider using a stored procedure instead.
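    For the stored-procedure route mentioned above, a minimal T-SQL sketch could do the sampling server-side instead; the table name dbo.MediasTags is an assumption for illustration, based on the typed DataTable used in the snippet.

        -- Minimal sketch only: ORDER BY NEWID() shuffles the rows so that
        -- TOP (@KeyCount) returns a different random sample on each call.
        CREATE PROCEDURE dbo.GetRandomSampleTags
            @KeyCount int
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT TOP (@KeyCount) *
            FROM dbo.MediasTags
            ORDER BY NEWID();
        END;

    This keeps the row copying on the server and avoids duplicating the whole DataTable in memory, at the cost of a sort over the table on each call.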

    Read the article

  • Dokuwiki: Moving just the data directory to another server

    - by amit
    I have installed DokuWiki on IIS7. As per my team's requirement, we have to move just the data directory to another server location, e.g.:

    - IIS7-installed DokuWiki location: C:\inetpub\wwwroot\dokuwiki\conf
    - data location we want on the other server: U:\Archive\LP_Archive\SH_Systems\DEV01\dokuwiki

    To do that I followed the pointers on "dokuwiki install iis7". As per the above link, I tried adding IUSR to the data folder permissions, but it's failing due to my insufficient privileges. And without that IUSR permission set on the data folder I get the error "The datadir ('pages') at is not found, isn't accessible or writable". Is there any other way to make it work? Is there any other account than IUSR I can use?

    Read the article

  • Using a service registry that doesn’t suck Part III: Service testing is part of SOA governance

    - by gsusx
    This is the third post of this series intended to highlight some of the principles of a modern SOA governance solution. You can read the first two parts here: Using a service registry that doesn’t suck part I: UDDI is dead Using a service registry that doesn’t suck part II: Dear registry, do you have to be a message broker? This time I’ve decided to focus on one of the aspects that drives me ABSOLUTELY INSANE about traditional SOA Governance solutions: service testing, or should I say the lack of...(read more)

    Read the article

  • Using a service registry that doesn’t suck part II: Dear registry, do you have to be a message broker?

    - by gsusx
    Continuing our series of posts about service registry patterns that suck, we decided to address one of the most common techniques that Service Oriented (SOA) governance tools use to enforce policies. Scenario Service registries and repositories typically serve as a mechanism for storing service policies that model behaviors such as security, trust, reliable messaging, SLAs, etc. This makes perfect sense given that SOA governance registries were conceived as a mechanism to store and manage the policies...(read more)

    Read the article

  • Agile SOA Governance: SO-Aware and Visual Studio Integration

    - by gsusx
    One of the major limitations of traditional SOA governance platforms is the lack of integration with the development process. Tools like HP-Systinet or SOA Software are designed to operate on models in which the architects dictate the governance procedures and policies and the rest of the team members follow along. Consequently, those procedures are frequently rejected by developers and testers given that they can’t incorporate them as part of their daily activities. Having SOA governance products...(read more)

    Read the article

  • SO-Aware at the Atlanta Connected Systems User Group

    - by gsusx
    Today my colleague Don Demsak will be presenting a session about WCF management, testing and governance using SO-Aware and the SO-Aware Test Workbench at the Connected Systems User Group in Atlanta. Don is a very engaging speaker and has prepared some very cool demos based on lessons from real-world WCF solutions. If you are in the ATL area and interested in WCF, AppFabric, or BizTalk, you should definitely swing by Don’s session. Don’t forget to heckle him a bit (you can blame me for it ;) )...(read more)

    Read the article

  • Tellago & Tellago Studios at Microsoft TechReady

    - by gsusx
    This week Microsoft is hosting the first edition of their annual TechReady conference. Even though TechReady is an internal conference, Microsoft invited us to present not one but two sessions about some of our recent work. We are particularly proud of the fact that one of those sessions is about our SO-Aware service registry. We see this as recognition of the growing popularity of SO-Aware as the best Agile SOA governance solution on the Microsoft platform. Well, on Tuesday I had the opportunity...(read more)

    Read the article

  • How to call Office365 web service in a Console application using WCF

    - by ybbest
    In my previous post, I showed you how to call the SharePoint web service using a console application. In this post, I’d like to show you how to call the same web service in the cloud, aka Office365. Office365 uses claims authentication, as opposed to the Windows authentication used for a normal in-house SharePoint deployment. For details of the explanation you can see Wictor’s post on this here. The key to making it work is to understand that when you authenticate against Office365, you get an authentication token. You then need to pass this token to your HTTP request as a cookie to make the web service call. Here is the code sample to make it work. I have modified Wictor’s code by removing the client object references.

        static void Main(string[] args)
        {
            MsOnlineClaimsHelper claimsHelper = new MsOnlineClaimsHelper(
                "[email protected]", "YourPassword", "https://ybbest.sharepoint.com/");
            HttpRequestMessageProperty p = new HttpRequestMessageProperty();
            var cookie = claimsHelper.CookieContainer;
            string cookieHeader = cookie.GetCookieHeader(new Uri("https://ybbest.sharepoint.com/"));
            p.Headers.Add("Cookie", cookieHeader);
            using (ListsSoapClient proxy = new ListsSoapClient())
            {
                proxy.Endpoint.Address = new EndpointAddress("https://ybbest.sharepoint.com/_vti_bin/Lists.asmx");
                using (new OperationContextScope(proxy.InnerChannel))
                {
                    OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = p;
                    XElement spLists = proxy.GetListCollection();
                    foreach (var el in spLists.Descendants())
                    {
                        //System.Console.WriteLine(el.Name);
                        foreach (var attrib in el.Attributes())
                        {
                            if (attrib.Name.LocalName.ToLower() == "title")
                            {
                                System.Console.WriteLine("> " + attrib.Name + " = " + attrib.Value);
                            }
                        }
                    }
                }
                System.Console.ReadKey();
            }
        }

    You can download the complete code from here.

    Reference:
    Managing shared cookies in WCF
    How to do active authentication to Office 365 and SharePoint Online

    Read the article

  • Navigating the Unpredictable Swinging of the Financial Regulation Pendulum

    - by Sylvie MacKenzie, PMP
    Written by Guest Blogger: Maureen Clifford, Sr Product Marketing Manager, Oracle. The pendulum of the regulatory clock is constantly in motion, albeit often not in any particular rhythm. Nevertheless, given what many insurers have been through economically, any movement can send shock waves through critical innovation and operational plans. As pointed out in Deloitte’s 2012 Global Insurance Outlook, the impact of regulatory reform can cause major uncertainty in the area of costs. As the reality of increasing government regulation settles in, the change that comes along with it creates more challenges in compliance and ultimately in delivering the optimum return on investment. The result of this changing environment is a proliferation of compliance projects that must be executed with an already constrained set of resources, budget and time. Insurers are confronted by the need to gain visibility into all of their compliance efforts and proactively manage them. Currently that is very difficult to do, as these projects are often managed by groups across the enterprise that lack a way to coordinate their efforts and drive greater synergies. With limited visibility and equally limited resources it is no surprise that reporting on project status and determining realistic completion of these projects is only a dream. As a result, compliance deadlines are missed, penalties are incurred, credibility with key stakeholders and the public is jeopardized, and returns and competitive advantage go unrealized. Insurers need to ask themselves some key questions:

    - Do I have “one stop” visibility into all of my compliance efforts? If not, what can I do to change that?
    - What is top priority and how does that impact my already taxed resources?
    - How can I figure out how to best balance my resources to get these compliance projects done as well as keep key innovation and operational efforts on track?
    - How can I ensure that I have all the requisite documentation for each compliance project I undertake?

    Dealing with regulatory compliance is a necessary evil. Don't let the regulatory pendulum sideline your efforts to generate the greatest return on investment for your key stakeholders.

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:

    - A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    - A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java.
    - A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    - UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures.

    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data-tracking spreadsheet that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent:

    - database table and column data structures
    - domain object data structures
    - service layer interface methods and parameter data structures
    - screen & UI component data structures
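    For the database tier at least, SQL Server can enumerate dependents itself. A minimal sketch using the sys.sql_expression_dependencies catalog view is shown below; it only covers objects inside the database (views, procedures, functions), not the domain, service, or UI tiers the question is really after, and the table name dbo.Customer is hypothetical.

        -- Minimal sketch: list database objects that reference a given table.
        -- dbo.Customer is a made-up table name for illustration.
        SELECT
            referencing_object = OBJECT_SCHEMA_NAME(d.referencing_id) + '.' + OBJECT_NAME(d.referencing_id),
            d.referenced_entity_name
        FROM sys.sql_expression_dependencies AS d
        WHERE d.referenced_id = OBJECT_ID('dbo.Customer');

    Anything above the database tier would still need to be stitched together from code analysis or a home-grown mapping like the spreadsheet described above.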

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight into how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'.

    I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;)

    This blog post isn't a traditional "deep dive" into internals, it's more of an approach to finding out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.)

    Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM):

        cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn"
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt

    I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)

    Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated.

    Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which file it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! – and am curious about the PDW compilation and Cloud DB replication reasons.

    One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found:

        hekaton_slow_param_passing
        Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
        The reason why Hekaton parameter passing code took the slow code path
        hekaton_slow_param_pass_reason
        sp_deploy_hekaton_database
        sp_undeploy_hekaton_database
        sp_drop_hekaton_database
        sp_checkpoint_hekaton_database
        sp_restore_hekaton_database
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

    Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)?

    I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

        SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%'
        SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

    Which revealed:

        name
        ------------------------
        (0 row(s) affected)

        name
        ------------------------
        sp_createstats
        sp_recompile
        sp_updatestats
        (3 row(s) affected)

    Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.) OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton". sp_createstats:

        -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
        -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

    OK, that's interesting, let's go looking down a little further:

        ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

    Wellllll, that tells us a few new things:

    - There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
    - They are not standard user tables and probably not in memory (UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.)
    - The OBJECTPROPERTY function has an undocumented TableIsInMemory option

    Let's check out sp_recompile:

        -- (3) Must not be a Hekaton procedure.

    And once again go a little further:

        if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
            ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
            ObjectProperty(@objid, 'IsView') = 0 AND
            -- Hekaton procedure cannot be recompiled
            -- Make them go through schema version bumping branch, which will fail
            ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

    And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null; I think it only works when called from a system object.) This is neat! Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:

        SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects
        WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

    I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice! Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.

    While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variable and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.)

    Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there:

        Attach Matrix Database
        DM_MATRIX_COMM_PIPELINES
        MATRIXXACTPARTICIPANTS
        dm_matrix_agents

    Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek:

        ALTER FEDERATION DROP
        ALTER FEDERATION SPLIT
        DROP FEDERATION

    #tsql2sday

    Read the article

  • Inside Sweden’s Nuclear Bunker Turned Data Center

    - by Jason Fitzpatrick
    A data center inside a decommissioned nuclear bunker is interesting enough, but one that looks as futuristic and awesome as the center under Stockholm begs to be seen. A hundred feet under the city of Stockholm is a decommissioned nuclear bunker that the government had previously leased out intermittently for various events, but it was never put to serious or extended use. Not until, that is, Jon Karlung discovered the location and brought his vision of an ultra-modern, stylish, and secure data center to life. The passage from Wired’s write up of their photo tour that best encapsulates the feel of the bunker is: Most often data centers are built in boxy warehouses, so Bahnhof stands out as perhaps the world’s most stylish. In fact, it inspired Cisco IT Architect Douglas Alger to write a book on the world’s best-looking data centers. ”The idea that people were sitting in a design meeting and said, ‘what we need for our data center is waterfalls,’ that must have been a very fascinating discussion,” Alger says. Hit up the link below for the full photo tour. Deep Inside the James Bond Villain Lair That Actually Exists [Wired]

    Read the article

  • Using transactions with LINQ-to-SQL

    - by Jalpesh P. Vadgama
    Today one of my colleagues asked how we can use transactions with LINQ-to-SQL classes when more than one entity is updated at the same time. It was a good question. Here is my answer. .NET 2.0 and higher have a class called TransactionScope which can be used to manage transactions with LINQ. Let’s take a simple scenario: we have a shopping cart application in which we store the details of each order placed into the database using LINQ-to-SQL. There are two tables, Order and OrderDetails, which will have all the information related to an order. Order will store general information about the order, while OrderDetails will have the product and quantity of each product for that order. We need to insert data into both tables at the same time, and if any error occurs the transaction should be rolled back. To use TransactionScope in the above scenario, first we have to add a reference to System.Transactions as below. After adding the reference, we need to drag and drop the Order and OrderDetails tables onto the LINQ-to-SQL designer, which will create entities for them. Below is the code that uses TransactionScope to manage the transaction with the LINQ context.

        MyContextDataContext objContext = new MyContextDataContext();
        using (System.Transactions.TransactionScope tScope
            = new System.Transactions.TransactionScope(TransactionScopeOption.Required))
        {
            objContext.Order.InsertOnSubmit(Order);
            objContext.OrderDetails.InsertOnSubmit(OrderDetails);
            objContext.SubmitChanges();
            tScope.Complete();
        }

    Here it will commit the transaction only if the using block runs successfully. Hope this will help you. Technorati Tags: Linq, Transaction, System.Transactions, ASP.NET
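    As a footnote, the all-or-nothing behaviour the scope guarantees can also be sketched directly in T-SQL, which is handy for verifying it from Management Studio. This is a minimal sketch only; the Order/OrderDetails column names and the identity assumption are hypothetical.

        -- Minimal sketch of the same atomic insert in plain T-SQL.
        -- Column names and the identity column on dbo.[Order] are assumptions.
        BEGIN TRY
            BEGIN TRANSACTION;
            INSERT INTO dbo.[Order] (OrderDate, CustomerId) VALUES (GETDATE(), 42);
            INSERT INTO dbo.OrderDetails (OrderId, ProductId, Quantity)
                VALUES (SCOPE_IDENTITY(), 7, 3);
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- undo both inserts on any error
            THROW;  -- re-raise the error (THROW needs SQL Server 2012+; use RAISERROR on older versions)
        END CATCH;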

    Read the article

  • SO-Aware sessions in Dallas and Houston

    - by gsusx
    Our WCF Registry: SO-Aware keeps being evangelized throughout the world. This week Tellago Studios' Dwight Goins will be speaking at Microsoft events in Dallas and Houston ( https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032469800&IO=ycqB%2bGJQr78fJBMJTye1oA%3d%3d ) about WCF management best practices using SO-Aware . If you are in the area and passionate about WCF you should definitely swing by and give Dwight a hard time ;)...(read more)

    Read the article

  • SQL Server 2005 to 2008 upgrade - are MDF files binary compatible?

    - by james
    I have 50 databases on an MS SQL Server 2005 system and want to upgrade to MS SQL Server 2008. This is what I tried on some test machines:

    1. Copied the \DATA directory from the source (MSSQL 2005) to exactly the same path on the target (MSSQL 2008) server.
    2. Edited the startup parameters on the MSSQL 2008 service to point to the path of the MSSQL 2005 master database.
    3. Restarted the MSSQL service.

    It worked and I can access all databases, tables and data. My questions are: I go back to SQL Server 4.2, and it has never been this easy. I know it worked, but should it have worked? Am I missing something, or is there going to be a gotcha next week? These are simple databases, with just tables, views and indexes. No cross-database links, no triggers, etc.
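    For comparison, the more conventional route is to attach the copied files explicitly on the 2008 instance and then review the compatibility level. A minimal sketch follows; the database name and file paths are made up for illustration.

        -- Attach a SQL Server 2005 database's files on a 2008 instance; the engine
        -- upgrades the internal format during the attach.
        CREATE DATABASE YourDb
            ON (FILENAME = 'C:\Data\YourDb.mdf'),
               (FILENAME = 'C:\Data\YourDb_log.ldf')
            FOR ATTACH;

        -- The database keeps its old compatibility level (90) after the upgrade;
        -- raise it to 100 when you are ready for SQL Server 2008 behaviour.
        ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 100;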

    Read the article

  • Compare those hard-to-reach servers with SQL Snapper

    - by Michelle Taylor
    If you’ve got an environment which is at the end of an unreliable or slow network connection, or isn’t connected to your network at all, and you want to do a deployment to that environment – then pointing SQL Compare at it directly is difficult or impossible. While you could run SQL Compare locally on that environment, if it’s a server – especially if it’s a locked-down server – you probably don’t want to go through the hassle of using another activation on it. Or possibly you’re not allowed to install software at all, because you don’t have admin rights – but you can run user-mode software. SQL Snapper is a standalone, licensing-free program which takes SQL Compare snapshots of a database. It can create a snapshot within the context of that environment which can then be moved to your working environment to run SQL Compare against, allowing you to create a deployment script for environments you can’t get SQL Compare into.

    Where can I find it?
    You can find RedGate.SQLSnapper.exe in your SQL Compare installation directory – if you haven’t changed it, that will be something like C:\Program Files (x86)\Red Gate\SQL Compare 10 (or 11 if you’re using our SQL Server 2014 support beta). As well as copying the executable, you’ll also currently need to copy the System.Threading.dll and RedGate.SOCCompareInterface.dll files from the same directory alongside it.

    How do I use it?
    SQL Snapper’s UI is just a cut-down version of the snapshot creation UI in SQL Compare – just fill in the boxes and create your snapshot, then bring it back to the place you use SQL Compare to compare against your difficult-to-reach environment. SQL Snapper also has a command-line mode if you can’t run the UI in your target environment – just specify the server, database and output location with the /server, /database and /mksnap arguments, and optionally the username and password if you’re using SQL security, e.g.:

        RedGate.SQLSnapper.exe /database:yourdatabase /server:yourservername /username:youruser /password:yourpassword /mksnap:filename.snp

    What’s the catch?
    There are a few limitations of SQL Snapper in its current form – notably, it can’t read encrypted objects, and you’ll also currently need to copy the System.Threading.dll and RedGate.SOCCompareInterface.dll files alongside it, which we recognise is a little awkward in some environments. If you use SQL Snapper and want to share your experiences, or help us work on improving the experience in future, please comment here or leave a request on the SQL Compare UserVoice at https://redgate.uservoice.com/forums/141379-sql-compare.

    Read the article

  • Big Data Accelerator

    - by Jean-Pierre Dijcks
    For everyone who does not regularly listen to earnings calls, Oracle's Q4 call was interesting (as it mostly is). One of the announcements in the call was the Big Data Accelerator from Oracle (Seeking Alpha link here - slightly tweaked for correctness shown below):  "The big data accelerator includes some of the standard open source software, HDFS, the file system and a number of other pieces, but also some Oracle components that we think can dramatically speed up the entire map-reduce process. And will be particularly attractive to Java programmers [...]. There are some interesting applications they do, ETL is one. Log processing is another. We're going to have a lot of those features, functions and pre-built applications in our big data accelerator."  Not much else we can say right now, more on this (and Big Data in general) at Openworld!

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in Tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days – in particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition of the term. However, in practical retail terms, I would say that it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost "real time" rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and commit a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions, and manage the inventory accordingly. Search, blogging, website and store data may need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail) but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process Buyers, Merchandisers and Supply Chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think Retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • Elevating Customer Experience through Enterprise Social Networking

    - by john.brunswick
    I am not sure about most people, but I really dislike automated call center routing systems. They are impersonal and convey a sense that the company I am dealing with does not see the value of providing customer service that increases positive perception of their brand. By the time I am connected with a live support representative I am actually more frustrated than before I originally dialed. Each time a company interacts with its customers or prospects there is an opportunity to enhance that relationship. Technical enablers like call center routing systems can be a double-edged sword – providing process efficiencies, but removing the human context from interactions that can build a lot of long-term value and create substantial repeat business. Certain web systems, available through "chat with a representative now" links on some web sites, provide a quick and easy way to get in touch with someone and cut down on help desk calls, but miss the opportunity to deliver an even more personal experience to customers and prospects. As more and more users head to the web for self-service and product information, the quality of this interaction becomes critical to supporting a company's brand image and viability. It takes very little effort to go a step further and elevate customer experience, without adding significant cost, through social enterprise software technologies.

    Enterprise Social Networking
    Social networking technologies have slowly gained footholds in the enterprise, evolving from something that people may have been simply curious about, to tools that have started to provide tangible value in the enterprise. Much like instant messaging, once considered a toy in the enterprise, expertise search, blogs as communications tools, and wikis for tacit knowledge sharing are all seeing adoption in a way that is directly applicable to the business and quickly adding value. So where does social networking come in when trying to enhance customer experience?

    Read the article
