Search Results

Search found 88156 results on 3527 pages for 'code contracts'.

Page 521/3527

  • Google Chrome Extensions: Launch Event (part 2)

    Google Chrome Extensions: Launch Event (part 2) Video footage from the Google Chrome Extensions launch event on 12/09/09. Aaron Boodman and Erik Kay, technical leads for the Google Chrome extensions team, present a quick history of the extensions system of Google Chrome and discuss its design principles, focusing on why extensions are webby. From: GoogleDevelopers Views: 3035 12 ratings Time: 05:25 More in Science & Technology

    Read the article

  • GDD-BR 2010 [2E] Building Business Apps using Google Web Toolkit and Spring Roo

    GDD-BR 2010 [2E] Building Business Apps using Google Web Toolkit and Spring Roo Speaker: Chris Ramsdale Track: Cloud Computing Time: 14:40 - 15:25 Room: 2 Level: 201 Who says you can't build rich web apps for your business? Follow along in this session to learn how you can use the latest integrated set of tools from Google and VMware to take your internal business apps into the cloud. We'll cover how to get started using GWT with Spring Roo and SpringSource Tool Suite (STS), as well as the new data presentation widgets and MVP framework that will be available in the 2.1 release of GWT. From: GoogleDevelopers Views: 69 0 ratings Time: 45:56 More in Science & Technology

    Read the article

  • MIX 2010 recap podcast

    - by Chris Williams
    From www.slickthought.net: Spaghetti Code Podcast: Recapping the MIX Conference (4/2/2010). Jeff Brand of Spaghetti Code is joined by Mike Hodnic, Jason Bock, Adam Grocholski and Chris Williams to share their thoughts and impressions from the Microsoft MIX conference, covering Windows Phone, Silverlight 4, and more.

    Read the article

  • YouTube API Office Hours May 23, 2012

    YouTube API Office Hours May 23, 2012 This is a recording of the YouTube API Hangout on Air from Wednesday 5/23 at 10am PDT (UTC-7). Jeffrey Posnick spoke about the new CORS support in the YouTube API. Shannon "JJ" Behrens, together with Jarek Wilkiewicz, covered the YouTube session schedule at Google I/O (developers.google.com). Our special guests were Dror Shimshowitz and AJ Crane from the YouTube Product Management team. Dror and AJ gave a short overview of an exciting session they have coming up at Google I/O. Topics:
    * YouTube Channels: Get with the Program!
    * Getting Direct Feedback from your YouTube Community
    * Mobile YouTube API Apps for Content Creators, Curators and Consumers
    * HTML5 at YouTube: Stories from the Front Line
    * YouTube API + Cloud Rendering = Happy Mobile Gamers
    * New YouTube Android Player Tools (Session + Codelab)
    * Master the Latest YouTube Data API (Codelab)
    * Webinar: YouTube for Your Business
    * Webinar: Using YouTube APIs and Ruby on Rails for Educational Apps
    From: GoogleDevelopers Views: 649 16 ratings Time: 46:44 More in Science & Technology

    Read the article

  • The Breakpoint Ep. 4 — The Tour De Timeline

    The Breakpoint Ep. 4 — The Tour De Timeline Ask and vote for questions at: goo.gl The DevTools' Timeline shows the heartbeat and health of your application's performance. In this episode we'll do a deep dive into how to uncover the cost of internal browser operations like parsing HTML, decoding images, invalidating layout geometry and painting to the screen. Paul and Addy will show you how best to approach improving the performance of your CSS and JS. From: GoogleDevelopers Views: 0 0 ratings Time: 01:00:00 More in Science & Technology

    Read the article

  • ASP.NET MVC JavaScript Routing

    - by zowens
    Have you ever done this sort of thing in your ASP.NET MVC view? The weird thing about this isn't the alert function, it's the code block containing the URL formation using the ASP.NET MVC UrlHelper. The terrible thing about this experience is the obvious lack of IntelliSense and this ugly inline JavaScript code. Inline JavaScript isn't portable to other pages beyond the current page of execution. It is generally considered bad practice to use inline JavaScript in your public-facing pages. How ludicrous would it be to copy and paste the entire jQuery code base into your pages? Not something you'd ever consider doing. The problem is that your URLs have to be generated by ASP.NET at runtime and really can't be copied to your JavaScript code without some trickery. How about this? Does the hard-coded URL bother you? It really bothers me. The typical solution to this whole routing-in-JavaScript issue is to just hard-code your URLs into your JavaScript files and call it done. But what if your URLs change? You now have to go and track down the places in JavaScript and manually replace them. What if you get the pattern wrong? Do you have tests around it? This isn't something you should have to worry about.

    The Solution To Our Problems
    The solution is to port routing over to JavaScript. Does that sound daunting to you? It's actually not very hard, but I decided to create my own generator that will do all the work for you. What I have created is a very basic port of the route-formation feature of ASP.NET routing. It will generate the formatted URLs based on your routing patterns. Here's how you'd do this: Does that feel familiar? It looks a lot like something you'd do inside of your ASP.NET MVC views, but this is inside of a JavaScript file... just a plain ol' .js file. Your first question might be why you have to have that ".toUrl()" thing. The reason is that I wanted to make POST and GET requests dead simple. Here's how you'd do a POST request (and the same would work with a GET request): The first parameter is extra data passed to the POST request and the second parameter is a function that handles the success of the POST request. If you're familiar with jQuery's Ajax goodness, you'll know how to use it (if not, check out http://api.jquery.com/jQuery.Post/; the parameters are essentially the same). But we still haven't gotten rid of the magic strings. We still have controller names and action names represented as strings. This is going to blow your mind... If you've seen T4MVC, this will look familiar. We're essentially doing the same sort of thing with my JavaScript router, but porting the concept to JavaScript. The good news is that parameters to the controllers are directly reflected in the action function, just like T4MVC. And the even better news: IntelliSense is easily transferred to the JavaScript version if you're using Visual Studio as your JavaScript editor. The additional data parameter gives you the ability to pass extra routing data to the URL formatter.

    About the Magic
    You may be wondering how this all works. It's actually quite simple. I've built a simple jQuery plugin (called routeManager) that hangs off the main jQuery namespace and routes all the URLs. Every time your solution builds, a routing file will be generated with this plugin and all your route and controller definitions, along with your documentation. Then, by the power of Visual Studio, you get some really slick IntelliSense that is hard to live without.
    But there are a few steps you have to take before this whole thing will work. First and foremost, you need to add a reference to JsRouting.Core.dll in your projects containing controllers or routes. Second, you have to specify your routes in a bit of a non-standard way. See, we can't just pull routes out of App_Start in your Global.asax. We force you to build a route source like this: The way we determine the routes is by pulling in all RouteSources and generating routes based upon the mapped routes. There are various reasons why we can't use RouteCollection (a different post for another day), but in this case you get the same route-mapping experience. Converting the RouteSource to a RouteCollection is trivial (there's an extension method for that). The next thing you have to do is generate a documentation XML file. This is done by going to the project settings, going to the Build tab and ticking the documentation checkbox (this isn't required, but it's nice to have). The final thing you need to do is hook up the generation mechanism. Pop open your project file and look for the AfterBuild step. Now change the build step task to look like this: The "PathToOutputExe" is the path to the JsRouting.Output.exe file; this will change based on where you put the EXE. The "PathToOutputJs" is the path to the output JavaScript file. The "DirectoryOfAssemblies" is a path to the directory containing controller and routing DLLs. The JsRouting.Output.exe executable pulls in all these assemblies and scans them for controllers and route sources.

    Now that wasn't too bad, was it :)

    The State of the Project
    This is definitely not complete. I have a lot of plans for this little project of mine. For starters, I need to look at the generation mechanism: either I will create a utility that does the project-file manipulation, or I will go a different direction. I'd like some feedback on this if you lean either way. Another thing I don't currently support is areas. While this wouldn't be too hard to support, I just don't use areas and I wanted something up quickly (this is, after all, for a current project of mine). I'll be adding support shortly. There are a few things that I haven't covered in this post that I will most certainly cover in another, such as routing constraints and how they will be translated to JavaScript. I decided to open source this whole thing, since it's a nice little utility I think others should really be using. Currently we're using ASP.NET MVC 2, but it should work with MVC 3 as well; I'll upgrade it as soon as MVC 3 is released. Along those same lines, I'm investigating how this could be put on the NuGet feed.

    Show Me the Bits!
    OK, OK! The code is posted on my GitHub account. Go nuts. Tell me what you think. Tell me what you want. Tell me that you hate it. All feedback is welcome! https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing
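    The screenshots that illustrated these calls did not survive in this excerpt. As a rough sketch of what the described usage might look like in the generated .js file (the routeManager plugin name comes from the post; the controller/action names and exact call shapes below are assumptions, not the post's actual API):

        // Hypothetical sketch only: the real API shapes were shown in the
        // post's missing screenshots, so all names here are illustrative.

        // Format a URL from the ASP.NET routing patterns, T4MVC-style:
        var url = $.routeManager.controllers.Products.Details({ id: 42 }).toUrl();
        // e.g. "/Products/Details/42"

        // POST request: extra data first, then a success callback,
        // mirroring the jQuery.post() parameters as the post describes.
        $.routeManager.controllers.Products.Save().post({ name: 'Widget' }, function (result) {
            alert('Saved!');
        });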

    Read the article

  • What groupware/project-management apps (preferably self-hosted webapp) do you recommend for a small dev shop?

    - by HedgeMage
    I run a small Drupal consulting shop and we've been trying different groupware solutions for what seems like ages, yet nothing we've found seems to be a good fit. We don't need CRM overkill such as SugarCRM offers; it's just too much for our small size. We do need:
      • git integration (at a minimum, an easy way to associate commits with issues)
      • time tracking in configurable or 15-minute increments
      • per-project issue tracking
      • billing (incl. recurring billing for support contracts, etc.)
      • some sort of per-project notes/wiki for things like login credentials, client contact info, etc.
      • contact logging ("Client foo called at 2:20pm and asked to add bar to the spec; signed addendum with pricing due to client NLT CoB today, to be returned by CoB tomorrow")
    Open source solutions are greatly preferred to closed ones. Most of all, it should be very efficient to use. Several solutions fell out of use here because they required too many clicks for simple, frequent tasks like logging time spent on an issue or noting a call from a client. It shouldn't take 20 minutes to make a note. Edit: I almost forgot to mention: we're a mixed Linux/Mac shop with no Windows users.

    Read the article

  • Chrome Apps Office Hours

    Chrome Apps Office Hours Ask and vote for questions here: goo.gl Now that you've got a handle on what Chrome Apps are and what they can do, we're going to build an app live, and dive into the new Windowing API to show you how you can completely configure the look and feel of your Chrome App window. We'll also explain more about Content Security Policy, and how it might affect your development. Remember, we want to hear from you! What are the APIs that you're most interested and excited about? Tell us at goo.gl so we can cover the things you're most interested in first! Be sure to add this event to your calendar and tune in next Tuesday! From: GoogleDevelopers Views: 1504 36 ratings Time: 44:00 More in Science & Technology

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself:
      • Can be used interactively - there is some environment where programmers can enter commands one by one
      • Requires no more than one file - neither project files nor makefiles are required for running in batch mode
      • Can easily split code across multiple files - files can reference each other, or there is some support for modules
      • Has good support for data structures - supports structures like arrays, lists, and especially classes
      • Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries
    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature           C#        Python    shell scripting
    ---------------   -------   -------   ---------------
    Interactive       poor      strong    strong
    One file          poor      strong    strong
    Multiple files    strong    strong    moderate
    Data structures   strong    strong    poor
    Features          strong    strong    strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:
      • Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
      • Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
      • Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features
    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Chrome Apps Office Hours: Networking APIs

    Chrome Apps Office Hours: Networking APIs Ask and vote for questions at goo.gl Writing a web server in a web browser is a little meta, but it's one of the many new scenarios unlocked by the new APIs available in Chrome Apps! Join Paul Kinlan and Pete LePage as they explore the networking APIs available to Chrome Apps. They'll show you how you can write your own web server, and connect to other devices via the network. While the web server in a browser may not be a common scenario, accessing the TCP and UDP stack is extremely powerful and will allow your apps to connect to other hardware or services like never before. From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Science & Technology

    Read the article

  • GDD-BR 2010 [2F] Storage, Bigquery and Prediction APIs

    GDD-BR 2010 [2F] Storage, Bigquery and Prediction APIs Speaker: Patrick Chanezon Track: Cloud Computing Time slot: F [15:30 - 16:15] Room: 2 Level: 101 Google is expanding our storage products by introducing Google Storage for Developers. It offers a RESTful API for storing and accessing data at Google. Developers can take advantage of the performance and reliability of Google's storage infrastructure, as well as the advanced security and sharing capabilities. We will demonstrate key functionality of the product as well as customer use cases. Google relies heavily on data analysis and has developed many tools to understand large datasets. Two of these tools are now available on a limited sign-up basis to developers: (1) BigQuery: interactive analysis of very large data sets and (2) Prediction API: make informed predictions from your data. We will demonstrate their use and give instructions on how to get access. From: GoogleDevelopers Views: 1 0 ratings Time: 39:27 More in Science & Technology

    Read the article

  • Google I/O 2012 - Designing for the Other Half: Sexy Isn't Always Pink

    Google I/O 2012 - Designing for the Other Half: Sexy Isn't Always Pink Leah Busque, Sepideh Nasiri, Jess Lee, Tracy Chou, Margaret Wallace. Women control 80 percent of consumer spending and drive the majority of user activity on many of the largest social networks. Female gamers over 55 spend the most time gaming online of any demographic. Are you thinking about how your product or business is attracting and engaging women? Hear from our panel on the technologies winning over female users that aren't so pink. From: GoogleDevelopers Views: 15 1 ratings Time: 59:33 More in Science & Technology

    Read the article

  • Google I/O 2012 - HTML5 and App Engine: The Epic Tag Team Take on Modern Web Apps at Scale

    Google I/O 2012 - HTML5 and App Engine: The Epic Tag Team Take on Modern Web Apps at Scale Brad Abrams, Ido Green. This talk discusses the latest and greatest application patterns and toolset for building cutting-edge HTML5 applications that are backed by App Engine. This makes it incredibly easy to write an app that spans client and server; in particular, authentication just works out of the box. The talk walks through building a fantastic cloud-based HTML5 application. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 20 0 ratings Time: 59:50 More in Science & Technology

    Read the article

  • Google+ Platform Office Hours: I/O Recap

    Google+ Platform Office Hours: I/O Recap This week we talked about Google I/O and reviewed some of the new Google+ platform features that were announced. Join the discussion about this session on Google+: goo.gl
    0:10 - Introductions
    2:55 - Stories about Google I/O 2012 #io12
    8:58 - The Sun is introduced
    9:40 - A brief introduction to the History API
    15:56 - Sign up for the History API developer preview
    17:13 - How to request a new moment type
    17:54 - Abraham and the History API at #iohack
    19:33 - Is the History API a Google+ write API?
    21:03 - The Sun joins our office hours (Thanks Chris Ridgeway!)
    24:00 - Does the history API work in a hangout yet?
    24:55 - Can Google+ Pages use the history API?
    26:40 - Should I use the official ruby Google API client library?
    28:48 - Should I index Google+ users by their profile ID or their email address?
    29:50 - Hangouts at I/O
    34:58 - Will Google+ history work with Gmail?
    36:05 - Does comments tracker work with events?
    36:25 - When will Hangouts On Air work in Germany?
    36:23 - Can we have screen capture of hangout video for use in the History API?
    39:50 - Can I run more than one Hangout App simultaneously?
    From: GoogleDevelopers Views: 242 12 ratings Time: 41:16 More in Science & Technology

    Read the article

  • Google Games Chat, Episode 2

    Google Games Chat, Episode 2 This is part two of the Google Games Chat series. Episode 1 ended abruptly, but if you want a sneak preview, check it out here: www.youtube.com Yeah! The Google Games Chat is back! Join the Google games crew as we talk about interesting industry trends, discuss challenges facing today's game developers, answer your hard-hitting questions, and figure out why our first video never made it onto YouTube. Ask us questions in the Google Moderator section below, or else this might just be another 45 minutes of awkward silence. From: GoogleDevelopers Views: 2140 43 ratings Time: 47:53 More in Science & Technology

    Read the article

  • Frustrated where I am, but not sure where to go with my career [closed]

    - by Tom Pickles
    I work (3 years now) as a lead developer for a team developing internal tools and websites for a customer account within a large outsourcing company. I'm a self-taught programmer and my previous incarnation was a 3rd line support guy, so I have solid infrastructure knowledge. We use VB.Net/MSSQL/SSIS/SSRS ASP.NET (n-tier) in house and I have about 8 years coding experience. Without going into too much detail, my boss is very ambitious and uses our team as his footing to get up the ladder. I've been in the team from the start, and the only new devs we have brought in have been people with a bit of VBA/VBScript experience, much to my chagrin, to bolster his empire. It's been a lot of hard work to bring them up to standard, but there's still a lot for them to learn. This makes my life stressful, as I always get the high-profile/complex project work to do: others simply cannot do it, or it'd take them two or three times longer. My boss is always seeking stuff for us to build for people who haven't asked for it, which usually gets thrown to me as I have the most experience and can pick up new APIs (etc.) quicker. He doesn't give us proper requirements, and we don't get time to design properly before we code; he wants us to throw something together ("quick and dirty", as he calls it) so we can get it out ASAP. I take pride in my work, so I like to do it properly and make my code clean and maintainable, and I train the other guys in the team to do the same. But we always fall on our faces. The customers we drop the apps on say they don't do what they need (due to the few requirements), or my boss doesn't like it or changes the spec, so we have to rework it; it gets drawn out, and it makes us, and me, look and feel like fools. We then get accused by the boss of not being reactive enough to change. I've had enough. In order to fill my skill and knowledge gaps, I've been reading Code Complete 2nd Ed (McConnell) and the Head First Design Patterns books. I'm forcing myself to move from VB to C# at home to broaden my horizons. I'm not sure where to go from here. I don't want to code all my life, as I'd like to move into a higher-level design/architect role at some point (I'm 35). Where do I/can I go from here?

    Read the article

  • StackOverflow Careers now includes user activity from CodePlex

    Stack Overflow Careers is an innovative new job site for programmers. One thing they have recognized is that participation in open source projects is a great way for potential employers to learn more about a job candidate, and it also gives developers a new way of differentiating themselves to employers. So they have now announced the ability to automatically incorporate your work on CodePlex projects into your Stack Overflow Careers 2.0 profile. We provide a secure way for Stack Overflow to confirm that you are indeed a member of your CodePlex projects, and then display those projects on your profile. Additionally, since Stack Overflow Careers is invitation-only, they have provided a mechanism where you can prequalify to join if you are an active developer on CodePlex. You can check to see if you prequalify here.

    Read the article

  • SSIS: Deploying OLAP cubes using C# script tasks and AMO

    - by DrJohn
    As part of the continuing series on Building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Services Management Objects (AMO). OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly; and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database and the connection string to the data mart. Although it would be reasonably straightforward to change the content of the XML files programmatically, this adds another layer of complication and obscurity to the overall process. When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard!

    SSIS "forgets" assembly references
    If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2!

    Sample Package
    So let's take a look at the sample SSIS package I have provided, which can be downloaded from here: DeployOlapCubeExample.zip. Below is a screenshot after a successful run.

    Connection Managers
    The package has three connection managers:
      • AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of your OLAP database project once you have clicked the "Build" button in Visual Studio.
      • TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name.
      • SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file.
    Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime. When you run the sample, you should see that the script logs its activity to the output screen (see screenshot above). If you configure logging for the package, then these messages will also appear in your SSIS logging.
    Sample Code Walkthrough
    Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the names of both the target OLAP server and the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties. We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy.

        svr.Connect(olapServerName);
        if (svr.Connected)
        {
            // Drop the OLAP database if it already exists
            Database db = svr.Databases.FindByName(olapDatabaseName);
            if (db != null)
            {
                db.Drop();
            }
            // rest of script
        }

    Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chunk of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process.

        // Start generating the main part of the XMLA command
        XmlDocument xmlaCommand = new XmlDocument();
        xmlaCommand.LoadXml(string.Format("<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'><Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object><DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>", olapDatabaseName));

    Next we need to merge the two XML files, which we can do by simply setting the InnerXml property of the ObjectDefinition node as follows:

        // load OLAP database definition from .asdatabase file identified by connection manager
        XmlDocument olapCubeDef = new XmlDocument();
        olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString);
        // merge the two XML files by obtaining a reference to the ObjectDefinition node
        oaRootNode.InnerXml = olapCubeDef.InnerXml;

    One hurdle I had to overcome was removing detritus from the .asdatabase file left by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail. The XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. In comparing the XMLA generated by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below:

        foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager))
        {
            node.ParentNode.RemoveChild(node);
        }

    Now we need to change the database name in both the ID and Name nodes using code such as:

        XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager);
        if (databaseID != null)
            databaseID.InnerText = olapDatabaseName;

    Finally we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replacing the content of the node with the new name or connection string.

        XmlNode connectionStringNode = xmlaCommand.SelectSingleNode("//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager);
        if (connectionStringNode != null)
        {
            connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString;
        }

    Finally we need to perform the deployment using the Execute XMLA command and check the returned XmlaResultCollection for errors before setting the Dts.TaskResult.
        XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);

        // check for errors during deployment
        foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults)
        {
            foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages)
            {
                if ((oMessage.GetType().Name == "XmlaError"))
                {
                    FireError(oMessage.Description);
                    HadError = true;
                }
            }
        }

    If you are not familiar with XML programming, this may all seem a bit daunting, but persevere as the sample code is pretty short. If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and even to synchronize the database to a front-end server. Personally, I like to keep deployment and processing separate, as the code can become overly complex for support staff. If you want to know more, come see my session at the forthcoming SQLBits conference.
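    The snippets above run their XPath queries through an nsManager variable with an ns1 prefix, but the walkthrough never shows its setup. A minimal sketch of how it could be created, assuming the Analysis Services engine namespace that appears in the XMLA string above (the variable names are illustrative, not from the original post):

        // Sketch (assumed, not from the post): map the "ns1" prefix used in
        // the XPath queries to the Analysis Services engine namespace.
        XmlNamespaceManager nsManager = new XmlNamespaceManager(xmlaCommand.NameTable);
        nsManager.AddNamespace("ns1", "http://schemas.microsoft.com/analysisservices/2003/engine");

        // The root node referenced in the walkthrough could then be obtained like so:
        XmlNode rootNode = xmlaCommand.SelectSingleNode("//ns1:Batch", nsManager);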

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas), and covers the topic of logging (the recording of information, not the harvesting of trees), and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars; check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).

    CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating that can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch, all variable declarations are lost.

    Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning; I could get the rest of the data at the end of the process. The code is pretty small:

        declare @time binary(128)=cast(getdate() as binary(8))
        set context_info @time

        ... rest of Big Adventure code ...

        go
        use master;
        insert mirror_test(server,role,partner,db,state,safety,start,duration)
        select @@servername, mirroring_role_desc, mirroring_partner_instance,
            db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc,
            cast(cast(context_info() as binary(8)) as datetime),
            datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate())
        from sys.database_mirroring
        where db_name(database_id) like 'Adv%';

    I declared @time as a binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion. It's not the safest way perhaps, but it works on my machine. :)
    As I mentioned earlier, you can query the system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor (his code is in UPPERCASE as it was in the original; mine is all lowercase):

        declare @msg binary(128)
        set @msg=cast('Altering bigProduct.ProductID' as binary(128))
        set context_info @msg
        go
        ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg1 binary(128)
        set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128))
        set context_info @msg1
        go
        ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
        GO
        set context_info 0x0
        go
        declare @msg2 binary(128)
        set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128))
        set context_info @msg2
        go
        ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg3 binary(128)
        set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
        set context_info @msg3
        go
        ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
        GO
        set context_info 0x0
        go
        declare @msg4 binary(128)
        set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
        set context_info @msg4
        go
        CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost)
        GO
        set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index. One annoyance is that SET CONTEXT_INFO requires a literal or variable; you can't use an expression. And since GO starts a new batch, I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.

    So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

        select CAST(context_info as varchar(128))
        from sys.dm_exec_sessions
        where session_id=51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs), there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it). Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation. If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
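    One possible shape of that closing exercise (my own illustration, not from the original post): pack the 8-byte binary datetime in front of the message text, then split the two apart when querying. The stage message is borrowed from the example above; the column aliases and SPID are assumptions.

        -- Tag the session with a start time plus a stage message (128 bytes total)
        declare @msg binary(128)
        set @msg = cast(getdate() as binary(8)) + cast('Altering bigProduct.ProductID' as binary(120))
        set context_info @msg

        -- From a monitoring session, recover both parts for the SPID of interest
        select cast(substring(context_info, 1, 8) as datetime) as stage_start,
               cast(substring(context_info, 9, 120) as varchar(120)) as stage_message,
               datediff(s, cast(substring(context_info, 1, 8) as datetime), getdate()) as seconds_so_far
        from sys.dm_exec_sessions
        where session_id = 51;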

    Read the article
