Search Results

Search found 15081 results on 604 pages for 'passing by reference'.


  • SQL SERVER – Saturday Fun Puzzle with SQL Server DATETIME2 and CAST

    - by pinaldave
    Note: I have used SQL Server 2012 for this small fun experiment. Here is what we are going to do: we will run the scripts one at a time instead of running them all together, and try to guess the answer at each step. I am confident that many will get it correct, but if you do not, you will learn something new. Let us create a database and a sample table. CREATE DATABASE DB2012 GO USE DB2012 GO CREATE TABLE TableDT (DT1 VARCHAR(100), DT2 DATETIME2, DT1C AS DT1, DT2C AS DT2); INSERT INTO TableDT (DT1, DT2) SELECT GETDATE(), GETDATE() GO There are four columns in the table. The first column, DT1, is a regular VARCHAR and the second, DT2, is DATETIME2. Both columns are populated with the same data, as I have used the function GETDATE(). Now let us run a SELECT statement and get the result from both columns. Before running the query, please guess the answer and write it down on paper or in a notepad. Question 1: Guess the resultset SELECT DT1, DT2 FROM TableDT GO Now once again run a SELECT statement on the same table, but this time retrieve the computed columns only. Once again, I suggest you write down the result. Question 2: Guess the resultset SELECT DT1C, DT2C FROM TableDT GO Now here is the best part. Let us use the CAST function over the computed columns. Here I do want you to stop and guess the answer for sure. If you have not done it so far, stop and do it; believe me, you will like it. Question 3: Guess the resultset SELECT CAST(DT1C AS DATETIME2) CDT1C, CAST(DT2C AS DATETIME2) CDT2C FROM TableDT GO Now let us inspect all the answers together and see how many of you got them correct. Answer 1: Answer 2: Answer 3: If you have not tried to run the scripts so far, you can execute all three of the above scripts together and see the results side by side. SELECT CAST(DT1C AS DATETIME2) CDT1C, CAST(DT2C AS DATETIME2) CDT2C FROM TableDT GO Here is the Saturday Fun question for you: why do we get the same result from both expressions in Question 3, whereas in Question 2 the two expressions give different results? I will publish the answer with an explanation in a future blog post. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
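    A small diagnostic sketch (an editor's addition, not part of the original puzzle): assuming the DB2012 database and TableDT table created above, you can check which data type each computed column inherits, which is enough to reason about Question 3 without spoiling the answer.

    -- Inspect the types of the computed columns DT1C and DT2C
    SELECT name, is_computed, TYPE_NAME(system_type_id) AS type_name
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.TableDT');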

    Read the article

  • SQL SERVER – Move Database Files MDF and LDF to Another Location

    - by pinaldave
    When a novice DBA or Developer creates a database, they usually use SQL Server Management Studio. Additionally, the T-SQL script to create a database is very easy as well. You can just write CREATE DATABASE DatabaseName and it will create a new database for you. The point to remember here is that it will create the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see that in future blog posts). Now, once the database goes into production it will start to grow. It is not common to keep the database at the same location where the OS is installed. Usually database files are kept on a SAN, a separate disk array or SSDs. This is usually done for performance reasons and from a manageability perspective. The challenge comes up when a database that was installed at a non-preferred default location needs to be moved to a different location. Here is a quick tutorial on how you can do it. Let us assume we have two folders, loc1 and loc2. We want to move the database files from loc1 to loc2. USE MASTER; GO -- Take database in single user mode -- if you are facing errors -- This may terminate your active transactions for database ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO -- Detach DB EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB' GO Now move the files from loc1 to loc2. You can then reattach the files at their new location. -- Move MDF File from Loc1 to Loc 2 -- Re-Attach DB CREATE DATABASE [TestDB] ON ( FILENAME = N'F:\loc2\TestDB.mdf' ), ( FILENAME = N'F:\loc2\TestDB_log.ldf' ) FOR ATTACH GO Well, we are done. There is a little warning here for you: ROLLBACK IMMEDIATE may terminate your active transactions, so do not use it casually. Use it only if you are confident the transactions are not needed, or if there is a connection to the database which you are not able to kill manually after review. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
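    An alternative sketch, not from the original post: ALTER DATABASE ... MODIFY FILE avoids detach/attach entirely, since the database never leaves the instance. The logical file names used here (TestDB, TestDB_log) are assumptions; check yours in sys.master_files first.

    -- Point the catalog at the new paths (takes effect when the DB next comes online)
    ALTER DATABASE TestDB MODIFY FILE (NAME = TestDB, FILENAME = N'F:\loc2\TestDB.mdf');
    ALTER DATABASE TestDB MODIFY FILE (NAME = TestDB_log, FILENAME = N'F:\loc2\TestDB_log.ldf');
    ALTER DATABASE TestDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- Move the physical MDF and LDF files from loc1 to loc2 at the OS level, then:
    ALTER DATABASE TestDB SET ONLINE;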

    Read the article

  • BizTalk Envelopes explained

    - by Robert Kokuti
    Recently I've been trying to get some order into an ESB-BizTalk pub/sub scenario, and decided to wrap the payload into standardized envelopes. I have used envelopes before in a 'lightweight' fashion, and I found that they can be quite useful and powerful if used systematically. Here is what I learned. The Theory In my experience, Envelopes are often underutilised in a BizTalk solution, and quite often their full potential is not well understood. Here I try to simplify the theory behind Envelopes within BizTalk. Envelopes can be used to attach additional data to the 'real' data (payload). This additional data can contain all routing and processing information, and allows treating the business data as a 'black box', possibly compressed and/or encrypted etc. The point here is that the infrastructure does not need to know anything about the business data content, just as a postman does not need to know the contents of the letter within the envelope. BizTalk has built-in support for envelopes through the XMLDisassembler and XMLAssembler pipeline components (these are part of the XMLReceive and XMLSend default pipelines). These components, among other things, perform the following: XMLDisassembler Extracts the payload from the envelope into the Message Body Copies data from the envelope into the message context, as specified by the property schema(s) associated with the envelope schema. Typically, once the envelope is through the XMLDisassembler, the payload is submitted into the Messagebox, and the rest of the envelope data is copied into the context of the submitted message. The XMLDisassembler uses the Property Schemas, referenced by the Envelope Schema, to determine the name of the promoted Message Context element. XMLAssembler Wraps the Message Body inside the specified envelope schema Populates the envelope values from the message context, as specified by the property schema(s) associated with the envelope schema. Notice that there is no requirement to use the receiving envelope schema when sending. The sent message can be wrapped within any suitable envelope, regardless of whether the message was originally received within an envelope or not. However, by sharing Property Schemas between Envelopes, it is possible to pass values from the incoming envelope to the outgoing envelope via the Message Context. The Practice Creating the Envelope Add a new Schema to the BizTalk project: Envelopes are defined as schemas, with the <Schema> Envelope property set to Yes, and the root node's Body XPath property pointing to the node which contains the payload. Typically, you'd create an envelope structure similar to this: Click on the <Schema> node and set the Envelope property to Yes. Then, click on the Envelope node, and set the Body XPath property pointing to the 'Body' node: The 'Body' node is a Child Element, and its Data Structure Type is set to xs:anyType. This allows the Body node to carry any payload data. The XMLReceive pipeline will submit the data found in the Body node, while the XMLSend pipeline will copy the message into the Body node before sending to the destination. Promoting Properties Once you have defined the envelope, you may want to promote the envelope data (anything other than the Body) as Property Fields, in order to preserve their values in the message context. Anything not promoted will be lost when the XMLDisassembler extracts the payload from the Body. Typically, this means you promote everything in the Header node. Property promotion uses associated Property Schemas.
These are special BizTalk schemas which have a flat field structure. Property Schemas define the names of the promoted values in the Message Context, by combining the Property Schema's namespace and the individual field names. It is worth being systematic when it comes to naming your schemas, their namespace and type name. A coherent method will make your life easier when it comes to referencing the schemas during development, and managing subscriptions (filters) in BizTalk Administration. I developed a fairly coherent naming convention which I'll probably share in another article. Because the property schema must be flat, I recommend creating one for each level in the envelope header hierarchy. Property schemas are very useful in passing data between incoming and outgoing envelopes. As I mentioned earlier, in/out envelopes do not have to be the same, but you can use the same property schema when you promote the outgoing envelope fields as you used for the incoming schema. As you can reference many property schemas for field promotion, you can pick data from a variety of sources when you define your outgoing envelope. For example, the outgoing envelope can carry some of the incoming envelope's promoted values, plus some values from the standard BizTalk message context, like the AdapterReceiveCompleteTime property from the BizTalk message-tracking properties. The values you promote for the outgoing envelope will be automatically populated from the Message Context by the XMLAssembler pipeline component. Using the Envelope Receiving Enveloped messages are automatically recognized by the XMLReceive pipeline, or any other custom pipeline which includes the XMLDisassembler component. The Body Path node will become the Message Body, while the rest of the envelope values will be added to the Message Context, as defined by the Property Schemas referenced by the Envelope Schema. Sending The Send Port's filter expression can use the promoted properties from the incoming envelope. If you want to enclose the sent message within an envelope, the Send Port XMLAssembler component must be configured with the fully qualified envelope name: One way of obtaining the fully qualified envelope name is to copy it from the envelope schema property page: The full envelope schema name is constructed as <Name>, <Assembly> The outgoing envelope is populated by the XMLAssembler pipeline component. The Message Body is copied to the specified envelope's Body Path node, while the rest of the envelope fields are populated from the Message Context, according to the Property Schemas associated with the Envelope Schema. That's all for now, happy enveloping!

    Read the article

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.* Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big-name sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling) it's still a useful reminder. Your users may take paths through and to your content you cannot control, and they are unlikely to deconstruct their assumptions along the way. I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? Ok, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about. Other off-site interaction It's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data. Your users get a better experience. There are easy wins, too. Blogs, forums, social media &c. People may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine, it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could. Direct contact Sales people, customer care, support, they all talk to people. Are they sending links to your content? If so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQs that are realistically frequent, detailed examples of things people want to do, that kind of thing. Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phone call, but your content strategy can support more interactions than browsing. *Passing observation about Facebook. For plenty of folks, it is the internet.
Its services are simple versions of what a lot of people use the internet for, and they're aggregated into one stop. Flickr, Vimeo, Wordpress, Twitter, LinkedIn, and all sorts of games, have Facebook doppelgangers that are not only friendlier to entry-level users, they're right there, behind only one layer of authentication. As such, it could own a lot of interaction convention. Heavy users may well not be tech-savvy, and be quite change-averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

    Read the article

  • SQLAuthority News – Updates on Contests, Books and SQL Server

    - by pinaldave
    There are lots of things happening on this blog and I feel it is sometimes difficult to keep up. One suggestion I keep receiving is to have a single page one can visit to see all the updates. I did consider it at some point, but in the era of RSS feeds it is difficult to draw a proper audience to such a page. Here are a few updates on the various contests and book giveaways of recent times. Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend – I have received so many entries for this contest. Many have sent me emails asking if this contest can be extended by a couple of days. Accordingly, the deadline for this contest is now Nov 10th, 7 AM. You can send your entries by that time. The prize is 2 combo sets of Joes 2 Pros, worth USD 444. If you have not taken part in the contest, please take part now. Guess What is in the Box? – There were many entries for this contest. We played this contest on the blog as well as Facebook. The answer to this contest was announced within 2 days, in the blog post announcing my new book. The winner was Manas Dash from Bangalore. He answered, "The box will contain a SQL book authored by Vinod and Pinal". This was the closest answer we received. The Win 5 SQL Programming Books contest will have its winners announced by Nov 15th, and the winners will be sent an email. The Win 5 SQL Wait Stats Books contest is closed and the winners have been sent their awards. My third book, SQL Server Interview Questions and Answers, ran out of stock in India within 36 hours of its launch. We are working very hard to make it available again. Thank you again for the excellent support! Without your participation, the giveaways have no significance. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on the new cardinality estimation, I received quite a few questions. Once I received these questions, I felt I should have clarified a few things earlier, when I started to write about cardinality. Before continuing with this blog, if you have not read them before, I suggest you read the following two blog posts. SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072 Q: Will the new cardinality estimation improve the performance of all of my queries? A: Remember, there is no 0-or-1 logic when it comes to estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014. Q: Is it possible that after changing cardinality estimation to the new logic by setting the compatibility level to 120, I get degraded performance for a few queries? A: Yes, it is possible. However, the number of queries with this impact should be very small. Q: Can I still run my database at an older compatibility level and force a few queries to use the newer cardinality estimation logic? If yes, how? A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic. USE AdventureWorks2014 GO -- Old Cardinality Estimation ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110 GO -- Using New Cardinality Estimation SELECT [AddressID],[AddressLine1],[City] FROM [Person].[Address] OPTION(QUERYTRACEON 2312); -- Using Old Cardinality Estimation SELECT [AddressID],[AddressLine1],[City] FROM [Person].[Address]; GO Q: Can I run my database at the newer compatibility level and force a few queries to use the older cardinality estimation logic? If yes, how? A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic. USE AdventureWorks2014 GO -- NEW Cardinality Estimation ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120 GO -- Using New Cardinality Estimation SELECT [AddressID],[AddressLine1],[City] FROM [Person].[Address]; -- Using Old Cardinality Estimation SELECT [AddressID],[AddressLine1],[City] FROM [Person].[Address] OPTION(QUERYTRACEON 9481); GO I guess I have covered most of the questions I have received so far. If I have missed any questions, please send them again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
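    A quick hedged sketch to complement the Q&A above: before forcing trace flags, you can verify which estimator a database will use by default by checking its compatibility level (120 means the new SQL Server 2014 estimator; 110 or lower means the old one).

    -- Check the compatibility level, and therefore the default cardinality estimator
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'AdventureWorks2014';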

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Basics of a Query Hint

    - by pinaldave
    This blog post is inspired by SQL Architecture Basics Joes 2 Pros: Core Architecture concepts – SQL Exam Prep Series 70-433 – Volume 3. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Basics of a Query Hint – A Primer. In that article we discussed the basic terminology of query hints. The article further covers the following important concepts of query hints. Expecting Seek and getting a Scan Creating an index for improved optimization Implementing the query hint The above three are the most important concepts related to query hints and SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1) You have the following query: DECLARE @UlaChoice VARCHAR(10) SET @UlaChoice = 'YP101' SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice You have a nonclustered index named IX_Legal_Ula on the UlaChoice field. The primary key is on the ID field and is called PK_Legal_ID. 99% of the time the value of @UlaChoice is set to 'YP101'. What query will achieve the best optimization for this query? SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(IX_Legal_Ula)) SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(PK_Legal_ID)) SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice OPTION (OPTIMIZE FOR(@UlaChoice = 'YP101')) 2) You have the following query: SELECT * FROM CurrentProducts WHERE ShortName = 'Yoga Trip' You have a nonclustered index on the ShortName field and the query runs an efficient index seek. You change your query to use a variable for ShortName and now you are using a slow index scan. What query hint can you use to get the same execution time as before? WITH LOCK FAST OPTIMIZE FOR MAXDOP READONLY Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance. Solution 1) 3 2) 4 Now let us check the answers; compare your answers to the ones above. I am very confident you got them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers to the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
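    A hedged illustration of the OPTIMIZE FOR hint from the quiz (the LegalActivity table and the 'YP101' value are the quiz's own assumptions): the hint tells the optimizer to compile the plan as if the variable held the common value, restoring seek-friendly estimates.

    DECLARE @UlaChoice VARCHAR(10);
    SET @UlaChoice = 'YP101';
    -- Compile the plan for the value used 99% of the time
    SELECT * FROM LegalActivity
    WHERE UlaChoice = @UlaChoice
    OPTION (OPTIMIZE FOR (@UlaChoice = 'YP101'));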

    Read the article

  • concurrency::accelerator

    - by Daniel Moth
    Overview An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class. For many scenarios, you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples of where you need to deal with accelerator objects are if you need to pick your own accelerator (based on your specific criteria), or if you need to use more than one accelerator in your app. Construction and operator usage You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup. Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device if you know it (i.e. the “Device Instance Path” property for the device in Device Manager), e.g. accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc"); There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor (and there are corresponding constants for those on the accelerator class itself, so you don’t have to hardcode them every time). Examples are the following: accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don’t pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc; accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref); accelerator::direct3d_warp represents a target that I will cover in future blog posts. Example: accelerator acc(accelerator::direct3d_warp); accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for the staging arrays technique that I'll cover separately. Example: accelerator acc(accelerator::cpu_accelerator); You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply assigning it to another accelerator instance (via the operator overloading of =). Speaking of operator overloading, you can also compare two accelerator objects (for equality and inequality) to determine whether they refer to the same underlying device. Querying accelerator characteristics Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting.
Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or check that the default one is good enough for your specific workload): void inspect_accelerator(concurrency::accelerator acc) { std::wcout << "New accelerator: " << acc.description << std::endl; std::wcout << "is_debug = " << acc.is_debug << std::endl; std::wcout << "is_emulated = " << acc.is_emulated << std::endl; std::wcout << "dedicated_memory = " << acc.dedicated_memory << std::endl; std::wcout << "device_path = " << acc.device_path << std::endl; std::wcout << "has_display = " << acc.has_display << std::endl; std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl; } accelerator_view In my next blog post I'll cover a related class: accelerator_view. Suffice it to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method that creates an accelerator_view object for you (by also accepting a queuing_mode enum value of deferred or immediate that we'll also explore in the next blog post). Comments about this post by Daniel Moth welcome at the original blog.
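    A brief illustrative sketch (typical usage assumed, not from the original post): enumerating every accelerator on the system with accelerator::get_all and passing each one to the inspect_accelerator function shown above.

    // Enumerate all accelerators discovered by the C++ AMP runtime at startup
    // and print the properties of each (uses inspect_accelerator from above).
    #include <amp.h>
    #include <vector>

    int main()
    {
        std::vector<concurrency::accelerator> accs = concurrency::accelerator::get_all();
        for (const concurrency::accelerator& acc : accs)
            inspect_accelerator(acc);
        return 0;
    }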

    Read the article

  • SQL SERVER – Inviting Ideas for SQL in Sixty Seconds – 12/12/12

    - by pinaldave
    Today is 12/12/12 – I am not sure when I will write this kind of date again – maybe never. This opportunity comes once in a lifetime, when the date, month and year all have the same digits. December 12th is one of the most fantastic days in my personal life. Four years ago on this day I got married to my wife – Nupur Dave. Here are photos of our wedding (Dec 12, 2008). Here is a very interesting photo of myself from earlier this year. It is not a photoshopped or modified photo. The only modification I have done here is to add the arrow and speech bubble. Every Wednesday I try to put up one SQL in Sixty Seconds video. The journey has been fantastic and so far I have put up a total of 35 SQL in Sixty Seconds videos. The goal of the videos is to learn something in 1 minute. In our daily lives we are all very busy and hardly have time for anything. No matter how busy we are – we all have one minute of time. Sometimes we wait for a minute in elevators, at the escalator, at a coffee shop, or just waiting for our phone to reboot. Today is a fantastic day – 12/12/12. Let me invite all of you to submit your SQL in Sixty Seconds ideas. If I like your idea and create a sixty second video from it – you will win surprise learning material from me. There are two very simple rules for the contest: I should not have already recorded the tip. The tip should be descriptive. Do not just suggest covering "Performance Tuning" or "How to Create Index" or "More of reporting services". The tip should have around 100 words of description explaining the SQL tip. The contest is open forever. The winner will be announced whenever I use the tip to convert it to a video. If I use your tip, I will for sure mention in the blog post that it was inspired by your suggestion. Meanwhile, do not forget to subscribe to the YouTube Channel. Here are my latest three videos from SQL in Sixty Seconds. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • SQLAuthority News – Weekend Experiment with NuoDB – Points to Ponder and Whitepaper

    - by pinaldave
    This weekend I downloaded the latest beta version of NuoDB. I found it much improved, with a better UI. I was very impressed, as the installation was very smooth and I was up and running with the product in less than 5 minutes. The tools related to the administration of NuoDB seem to have gotten a makeover in this beta release. As per their claim, they now support the Solaris platform and have improved the native MacOS installation. I have neither a Mac nor Solaris – I wish I could have experimented with those as well. I would appreciate it if anyone out there can confirm how the installation goes on these platforms. I have previously blogged about my experiments with NuoDB here: SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database SQL SERVER – Beginning NuoDB – Who will Benefit and How to Start SQL SERVER – Follow up on Beginning NuoDB – Who will Benefit and How to Start – Part 2 I am very impressed with the product so far and I have decided to understand the product in further depth. Here are a few of the questions to which I am going to try to find answers with regard to NuoDB. Just so it is clear – NuoDB is not NoSQL; as a matter of fact, it follows all the ACID properties of a database. If ACID properties are crucial, why do many NoSQL products not adhere to them? (There are a few out there that do follow ACID, but not all.) I do understand the scalability of the database, but is elasticity crucial for a database, and if yes, how? (Elasticity matters where the workload on the database fluctuates heavily and the need for more than a single database server comes up.) How has NuoDB built a scalable, elastic and 100% ACID-compliant database which supports multiple platforms? How does NoSQL compare to NuoDB's new architecture? In the coming weeks, I am going to explore the above concepts and dive deeper into understanding them. Meanwhile I have read the following white paper written by experts at the University of California at Santa Barbara. A very interesting read and a great starter on the subject: Database Scalability, Elasticity, and Autonomy in the Cloud. Additionally, since my questions also touch on NoSQL, this weekend I started to learn about NoSQL from Pluralsight's online learning library. I will share my experience very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: NuoDB

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” – The Unified Modeling Language Reference Manual. Inception using Post-it, Storyboard, Lego or Mindmapping Techniques The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort, it is very convenient that your CX team have some "abstract thinking" skills. Besides, it is very helpful to consult a Business Service Design offered by an Interactive Agency to lead your inception process. Initially, you may start with a free discussion using Post-it cards, storyboards, even Lego or any other brainstorming technique you like. This will help you get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org) Colorful mind maps are very useful to document and share meeting ideas. Some mind-mapping software providers such as ThinkBuzzan provide trial versions, and you will find more mindmapping options in this post by Mashable. Finally, to produce a quick one, I do recommend Wise, an entirely online mindmapping service. In my view the best results in terms of communication will always come from an artistic hand-made drawing. Customer Experience Mind Map Example Making your first Customer Journey Map To add some more formalization to your thoughts, there is a wide offering for designing Customer Journey Maps. A customer map can be represented as a directed graph in which each step is followed by the next. The one below is the simplest Customer Journey you can draw: nothing more than a couple of pictures, numbers and lines to show the sequence of customer steps in the purchase process. Very simple Customer Journey for Social Mobile Shopping There are many more sophisticated Customer Journey templates available on the Web in a variety of styles, for example this one with a focus on underlining the emotional experience, or this other worksheet template. Representing the different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally, is today's most common format for Customer Journeys. From Customer Journey Maps to CX Technology Adoption Plans Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Digging step by step from the more abstract to the more concrete is the best guarantee of taking the right IT investment decisions. Remember to keep your initial customer journey map safe in your pocket in every one of your CX project meetings – that's your map to success!

    Read the article

  • SQL SERVER – Order By Numeric Values Formatted as String

    - by pinaldave
    When I was writing this blog post I had a hard time coming up with the title, so I did my best to come up with one. Here is the reason why. I wrote an earlier blog post, SQL SERVER – Find First Non-Numeric Character from String. One of the questions on it was how that technique can be useful in a real-life scenario. This blog post is the answer to that question. Let us first see the problem. We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the latter part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. The problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Prepare the sample data: -- How to find first non-numeric character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO The above query will give the following result set. Now let us use ORDER BY Col1 and observe the result along with the original SELECT. -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO The result is not as expected. We need the result in the following format. Here is a good example of how we can use PATINDEX. -- Use of PATINDEX SELECT ID, LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) 'Numeric Character', Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO We can use PATINDEX to identify the length of the digit part in the alphanumeric string. (Remember: our string always has an integer as its first part. This will not work in any other scenario.) Now you can use the LEFT function to extract the INT portion from the alphanumeric string and order the data according to it. You can easily clean up the script by dropping the table. DROP TABLE MyTable GO Here is the complete script so you can easily refer to it. -- How to find first non-numeric character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO -- Use of PATINDEX SELECT ID, Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO DROP TABLE MyTable GO Well, isn't it an interesting solution? Any suggestions for a better solution? Additionally, any suggestions for changing the title of this blog post? Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology
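    A hedged refinement, not from the original post: two edge cases are worth guarding against. Ordering by LEFT(...) compares strings, so '111' can sort before '2'; casting the extracted part to INT gives a true numeric order. And if a value ever contains only digits, PATINDEX returns 0 and LEFT(Col1, -1) raises an error; appending a sentinel non-digit character avoids that.

    -- Numeric ordering with a sentinel guard (same MyTable as above)
    SELECT ID, Col1,
           CAST(LEFT(Col1, PATINDEX('%[^0-9]%', Col1 + 'X') - 1) AS INT) AS NumericPart
    FROM MyTable
    ORDER BY CAST(LEFT(Col1, PATINDEX('%[^0-9]%', Col1 + 'X') - 1) AS INT);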

    Read the article

  • SQLAuthority News – A Quick Note on @Pluralsight Video – Call Me Maybe Developer Way

    - by pinaldave
    I write a lot about how important learning and training are. Any of my readers will know that I think the key to success is staying current with your education and taking every opportunity to increase your "tool kit" of skills. I hope that I have not given the impression that it is all in the employees' hands to make sure they are happy and satisfied at their jobs. I also firmly believe that a good boss makes good employees. A boss who is good at communicating and leading, who knows how to nip problems in the bud and allocate resources wisely, will have a well-oiled machine. This means happy employees and a great work environment. It is important to have a healthy work environment because you will not succeed without one. Successful businesses will always have the type of environment that fosters creativity and has efficient employees. A healthy environment doesn't force employees to produce results, but allows them to progress and create the results themselves. The result of a healthy work environment is that employees will enjoy their work and work harder. This can bring the company more revenue, and hopefully the employees will see the result of their hard work in bonuses and raises. However, while money is important, it is certainly secondary – the important part is the dedication of the employees to their work and to their company. This is the true key to success. Any employee who recognizes this description as their working environment should consider themselves fortunate. They are allowed to grow and do better, and employees being treated fairly can be a rarity in this world. One company that I believe adheres to this principle is Pluralsight – as evidenced by this fun video. I have blogged about it earlier. (Check out my cameo at 0:37.) It was great fun to work with the employees at Pluralsight while making this video. They are a great bunch and clearly have a great work environment – we wouldn't have had this much fun otherwise! I have to tell you a little bit about making this video. My wife shot it with her mobile phone, which was certainly a different but exciting experience! It was hard to get the look of the video right, since I was trying to portray a body builder – this was a little outside of my own personal experience. I have what I like to call a "healthy" body type, so trying to look extremely fit like some of the other "actors" in this video was a challenge – but I do hope that you all think I succeeded. All in all, it was great fun to participate in this video and I hope to see my friends at Pluralsight again soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Silverlight Cream for April 11, 2010 -- #836

    - by Dave Campbell
    In this Issue: Rénald Nollet, Roboblob, Laurent Bugnion, Timmy Kokke, Michael Sync(-2-), Victor Gaudioso, and Bill Reiss. Brought to you from a tiny table in my no-tell-motel™ in 'Vegas AKA "cheaper than anywhere else" and the WiFi is free and smokin'... From SilverlightCream.com: Sync your Silverlight out-of-browser application data without service but with Dropbox Rénald Nollet is in good company (Walt Ritscher) because he's demo'ing synching OOB apps with Dropbox. Unit Testable WCF Web Services in MVVM and Silverlight 4 Roboblob is discussing calling WCF Web Services from a Silverlight MVVM app... something he's been avoiding up to now. Good long code and text-filled article, with the project to download. Using commands with ApplicationBarMenuItem and ApplicationBarButton in Windows Phone 7 I almost missed this post by Laurent Bugnion, on how to get around the problem of not being able to attach commands to the ApplicationBarMenuItem and ApplicationBarButton in WP7. He gets around it with (gasp) code behind :) Introducing jLight – Talking to the DOM using Silverlight and jQuery. Oh boy... if you haven't been using jQuery yet, this should get it going for you... Timmy Kokke has produced jLight which brings jQuery into Silverlight for ease of DOM interaction... and it's on CodePlex! Test-Driven Development in Windows Phone7 – Part 1: Unit Testing with Silverlight for Phone7 Michael Sync is starting a series on TDD with Silverlight on WP7. Great tutorial and all the code is available. Tip: “Object reference not set to an instance of an object” Error in Silverlight for Windows Phone 7 Michael Sync also has a tip up for the resolution of an error we've all seen in all sorts of development, but now in WP7, and the workaround is deceptively simple. New Silverlight Video Tutorial: How to Create Gradients Victor Gaudioso has a new video tutorial up for creating gradients in Expression Blend... don't fumble around in Blend... learn from the master! Space Rocks game step 8: Hyperspace Bill Reiss is up to episode 8 in his Silverlight game series now... this latest is on Hyperspace ... don'cha wish you could just do that? My trip to 'Vegas would have been a lot faster :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was one of the top 10 wait types in many of the systems I have come across in my performance tuning experience. Books On-Line: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider. OLEDB Explanation: This wait type primarily happens when a Linked Server or Remote Query is executed. The most common case wherein this wait type is visible is during the execution of a Linked Server query. When SQL Server is retrieving data from the remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough or the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs. Reducing OLEDB wait: Check the Linked Server configuration. Check disk-related Perfmon counters: Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good) Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good) Average Disk Read/Write Queue Length (a consistently higher value than the benchmark is not good) At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the Community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queues series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
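    A quick hedged sketch for checking how prominent this wait actually is on your instance (cumulative figures since the last restart or statistics clear):

    -- How much OLEDB waiting has this instance accumulated?
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'OLEDB';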

    Read the article

  • SQL SERVER – Preserve Leading Zero While Copying to Excel from SSMS

    - by pinaldave
    Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts, plenty of interest has been generated on this subject, and there are a few questions I keep on getting about it. One of the questions is how to keep the leading zero preserved while copying the data from SSMS to Excel. Well, it is almost the same as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. The step here is to change the format of the Excel cells from Number to Text, and that will preserve the value with leading or trailing zeros in Excel. However, I assume this is done for display purposes only, because once you convert the column to Text you may find it difficult to do numeric operations over it, for example aggregation, averages etc. If you need to do those, you should either convert the column back to Numeric in Excel or do the processing in the database and export the formatted value along with the numeric one. However, I have seen requirements in the real world where the user has to have a numeric value with leading zeros in it for display purposes. Here is my suggestion: instead of manipulating the numeric value in the database and converting it to a character value, the ideal thing to do is to store it as a numeric value only in the database. Whatever changes you want to make for display purposes should be handled at display time, using the format functions of SQL or the application language. Honestly, the database is the data layer and presentation is the presentation layer – they are two different things and if possible they should not be mixed. If for any reason you cannot follow the above advice and you need to append leading zeros in the database itself, here are two of my previous articles I suggest you refer to. I am open to learning new tricks as these articles are almost three years old. Please share your opinions and suggestions in the comments area. SQL SERVER – Pad Right Side of Number with 0 – Fixed Width Number Display SQL SERVER – UDF – Pad Right Side of Number with 0 – Fixed Width Number Display Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Excel
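    A short hedged sketch of the display-time approach recommended above (the table, column name and width are illustrative assumptions, not from the post): keep the column numeric and pad only in the SELECT.

    -- Pad a numeric value to a fixed width of 6 at display time
    -- (dbo.Orders and OrderNumber are hypothetical names)
    SELECT RIGHT('000000' + CAST(OrderNumber AS VARCHAR(6)), 6) AS PaddedOrderNumber
    FROM dbo.Orders;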

    Read the article

  • Refreshing imported MySQL data with MySQL for Excel

    - by Javier Rivera
    Welcome to another blog post from the MySQL for Excel Team. Today we're going to talk about a new feature included since MySQL for Excel 1.3.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone. As some users suggested in our forums, we should be maintaining the link between tables and Excel not only when editing data through the Edit MySQL Data option, but also when importing data via Import MySQL Data. Before 1.3.0 this process only provided you with an offline copy of the table's data in Excel, and you had no way to refresh that information from the DB later on. Now, with this new feature, we'll show you how easy it is to work with the latest available information at all times. This feature is transparent to you (it doesn't require additional steps) as long as you have the Create an Excel Table for the imported MySQL table data option enabled. To ensure you have this option checked, click Advanced Options... after the Import Data dialog is displayed. The current blog post assumes you already know how to import data into Excel; you can always take a look at our previous post How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel if you need further reference on that topic. After importing data from a MySQL table into Excel, you can refresh the data in 3 ways. 1. Simply right click over the range of the imported data to show the pop-up menu: Click the Refresh button to obtain the latest copy of the data in the table. 2. Click the Refresh button on the Data ribbon: 3. Click the Refresh All button in the Data ribbon (beware: this will refresh all Excel tables in the Workbook): Please take note of a couple of details here. The first one is about the size of the table: if by the time you refresh the table new columns have been added to it, and you originally imported all columns, the table will grow to the right. The same applies to rows: if the table has new rows and you did not limit the results, the table will grow to the bottom of the sheet in Excel. The second detail you should take into account is that this operation will overwrite any changes done to the cells after the table was originally imported or previously refreshed. Now with this new feature, imported data remains linked to the data source and is available to be updated at all times. It empowers you to always work with the latest version of the imported MySQL data. We hope you like this new feature and give it a try! Remember that your feedback is very important for us, so drop us a message with your comments and suggestions for this or other features, and follow us on our social media channels: MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/ MySQL for Excel forum: http://forums.mysql.com/list.php?172 Facebook: http://www.facebook.com/mysql YouTube channel: https://www.youtube.com/user/MySQLChannel Thanks!

    Read the article

  • SQLAuthority News – I am Presenting 2 Sessions at TechEd India

    - by pinaldave
    TechEd is the event which I am always excited about. It is one of the largest technology events in India. Microsoft Tech Ed India 2011 is the premier technical education and networking event for tech professionals interested in learning, connecting and exploring a broad set of current and soon-to-be released Microsoft technologies, tools, platforms and services. I am going to speak at TechEd on two very interesting and advanced subjects. Venue: The LaLiT Ashok Kumara Krupa High Grounds Bangalore – 560001, Karnataka, India Sessions Date: March 25, 2011 Understanding SQL Server Behavioral Pattern – SQL Server Extended Events Date and Time: March 25, 2011 12:00 PM to 01:00 PM History repeats itself! SQL Server 2008 introduced a very powerful, yet very lightweight, feature called Extended Events. This advanced session will teach experienced administrators capabilities that were not possible before. From a T-SQL error to a CPU bottleneck, from a login error to deadlocks – Extended Events can detect it for you. Understanding the pattern of events can prevent future mistakes. SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting Date and Time: March 25, 2011 04:15 PM to 05:15 PM Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server performance tuning uses Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at the Wait Types can tell you where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast-paced, advanced performance tuning session. My sessions will be on the third day of the event and I am very sure that everybody will be in the groove to learn new and interesting subjects. I will have a few giveaways during and at the end of each session. I will not tell you what they are, but it will for sure be something you will love to have. Please make a point of reserving the above time slots to attend my sessions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require these attributes under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BINDING Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are: BITS (the section is not of type SHT_NOBITS) and NOBITS (the section is of type SHT_NOBITS). SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authentication

    - by Your DisplayName here!
This post shows some of the implementation techniques for adding token and claims based security to HTTP/REST services written with WCF. For the theoretical background, see my previous post. Disclaimer The framework I am using/building here is not the only possible approach to tackle the problem. Based on customer feedback and requirements, the code has gone through several iterations to a point where we think it is ready to handle most situations. Goals and requirements The framework should be able to handle typical scenarios like username/password based authentication as well as token based authentication; it should allow adding new supported token types; it should work with the WCF web programming model, either self-hosted or IIS hosted; and service code should be able to rely on an IClaimsPrincipal on Thread.CurrentPrincipal that describes the client using claims-based identity. Implementation overview In WCF the main extensibility point for this kind of security work is the ServiceAuthorizationManager. It gets invoked early enough in the pipeline, has access to the HTTP protocol details of the incoming request and can set Thread.CurrentPrincipal. The job of the SAM is simple: Check the Authorization header of the incoming HTTP request. Check if a “registered” token (more on that later) is present. If yes, validate the token using a security token handler, create the claims principal (including claims transformation) and set Thread.CurrentPrincipal. If no, set an anonymous principal on Thread.CurrentPrincipal. By default, anonymous principals are denied access – so the request ends here with a 401 (more on that later). To wire up the custom authorization manager you need a custom service host – which in turn needs a custom service host factory. The full object model looks like this: Token handling A nice piece of existing WIF infrastructure is security token handlers. Their job is to deserialize a received security token into a CLR representation, validate the token and turn the token into claims. The way this works with WS-Security based services is that WIF passes the name/namespace of the incoming token to WIF’s security token handler collection. This in turn finds out which token handler can deal with the token and returns the right instance. For HTTP based services we can do something very similar. The scheme on the Authorization header gives the service a hint how to deal with an incoming token. So the only missing link is a way to associate a token handler (or multiple token handlers) with a scheme and we are (almost) done. WIF already includes token handlers for a variety of tokens like username/password or SAML 1.1/2.0. The accompanying sample has an implementation of a Simple Web Token (SWT) token handler, and as soon as JSON Web Tokens are ready, simply adding a corresponding token handler will add support for that token type, too. All supported schemes/token types are organized in a WebSecurityTokenHandlerCollectionManager and passed into the host factory/host/authorization manager. Adding support for basic authentication against a membership provider would e.g.
look like this (in global.asax): var manager = new WebSecurityTokenHandlerCollectionManager(); manager.AddBasicAuthenticationHandler((username, password) => Membership.ValidateUser(username, password)); Adding support for Simple Web Tokens with a scheme of Bearer (the current OAuth2 scheme) requires passing in an issuer, an audience and a signature verification key: manager.AddSimpleWebTokenHandler(     "Bearer",     "http://identityserver.thinktecture.com/trust/initial",     "https://roadie/webservicesecurity/rest/",     "WFD7i8XRHsrUPEdwSisdHoHy08W3lM16Bk6SCT8ht6A="); In some situations, SAML tokens may be used as well. The following configures SAML support for tokens coming from ADFS2: var registry = new ConfigurationBasedIssuerNameRegistry(); registry.AddTrustedIssuer( "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db", "ADFS"); var adfsConfig = new SecurityTokenHandlerConfiguration(); adfsConfig.AudienceRestriction.AllowedAudienceUris.Add( new Uri("https://roadie/webservicesecurity/rest/")); adfsConfig.IssuerNameRegistry = registry; adfsConfig.CertificateValidator = X509CertificateValidator.None; // token decryption (read from config) adfsConfig.ServiceTokenResolver = IdentityModelConfiguration.ServiceConfiguration.CreateAggregateTokenResolver(); manager.AddSaml11SecurityTokenHandler("SAML", adfsConfig); Transformation The custom authorization manager will also try to invoke a configured claims authentication manager. This means that the standard WIF claims transformation logic can be used here as well. And even better, it can also be shared with e.g. a “surrounding” web application. Error handling A WCF error handler takes care of turning “access denied” faults into 401 status codes, and a message inspector adds the registered authentication schemes to the outgoing WWW-Authenticate header when a 401 occurs. The next post will conclude with authorization as well as the source code download. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Error Actions

    - by pinaldave
This blog post is inspired from SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 4. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to SQL Error Actions – A Primer. In that article we discussed the basic terminology of error handling. This article further covers the following important concepts of error handling. Introduction to SQL Error Actions Statement Termination Scope Abortion Batch Termination The above three are the most important concepts related to error handling in SQL Server. There are many more things to learn, but without the beginner fundamentals one cannot learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to ON? You will get Statement Termination. You will get Scope Abortion. You will get Batch Abortion. You will get Connection Termination. SQL Server will pick the error action. 2.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to OFF? You will get Statement Termination You will get Scope Abortion You will get Batch Abortion You will get Connection Termination SQL Server will pick the error action Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance. Solution 1) 3 2) 5 Now let us check the answers; compare your answers to the ones above. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
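To see the two quiz scenarios in action before checking the answers, here is a small script (an editor's illustration, not from the book) that you can run in a test database. A divide-by-zero raises a severity 16 error; with XACT_ABORT OFF, SQL Server picks the action itself and, for this particular error, terminates only the statement, so the PRINT still runs. With XACT_ABORT ON, the whole batch is aborted and the PRINT never executes.
-- Scenario 2: XACT_ABORT OFF, SQL Server picks the error action
SET XACT_ABORT OFF;
SELECT 1/0; -- severity 16 error; only this statement is terminated
PRINT 'This prints when XACT_ABORT is OFF';
GO
-- Scenario 1: XACT_ABORT ON, the error aborts the entire batch
SET XACT_ABORT ON;
SELECT 1/0; -- severity 16 error; the batch stops here
PRINT 'This never prints when XACT_ABORT is ON';
GO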

    Read the article

Simple Preferred time control using Silverlight 3.

    - by mohanbrij
Here I am going to show you a simple preferred time control, where you can select the day of the week and the time of the day. This can be used in lots of places where you may need to display a user's preferred times. A sample screenshot is attached below. This control is developed using Silverlight 3 and VS2008, and I am also attaching the source code with this post. This is a very basic example; you can download and customize it further for your requirements if you want. I will try to explain in a few words how this control works and the different ways in which you can customize it further. File: PreferredTimeControl.xaml, in this file I have just hardcoded the controls and their positions, which you can see in the screenshot above. In this example, to change the start day of the week and the time, you will have to change the design in the XAML file; it is not controlled by properties or implementation classes. You can also customize it to change the start day of the week, language, display format, styles, etc. File: PreferredTimeControl.xaml.cs, in this file, using the code below, I first take all the checkboxes from the form and store them in a global variable, which I can use across my page. List<CheckBox> checkBoxList; #region Constructor public PreferredTimeControl() { InitializeComponent(); GetCheckboxes(); // Keep all the checkboxes in the list at load time } #endregion #region Helper Methods private List<CheckBox> GetCheckboxes() { // Get all the CheckBoxes on the form checkBoxList = new List<CheckBox>(); foreach (UIElement element in LayoutRoot.Children) { if (element.GetType().ToString() == "System.Windows.Controls.CheckBox") { checkBoxList.Add(element as CheckBox); } } return checkBoxList; } Then I expose two methods which you can use in the containing form to get and set the values in this control. /// <summary> /// Set the Availability on the Form, with the Provided Timings /// </summary> /// <param name="selectedTimings">Provided timings come from the DB in the form 11,12,13....37 /// Where 11 refers to Monday Morning, 12 Tuesday Morning, etc /// Here 1, 2, 3 is for Morning, Afternoon and Evening respectively, and for weekdays /// 1,2,3,4,5,6,7 where 1 is for Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday respectively /// So if we want Monday Morning, we can denote it as 11; similarly for Saturday Evening we can write 36, etc /// </param> public void SetAvailibility(string selectedTimings) { foreach (CheckBox chk in checkBoxList) { chk.IsChecked = false; } if (!String.IsNullOrEmpty(selectedTimings)) { string[] selectedString = selectedTimings.Split(','); foreach (string selected in selectedString) { foreach (CheckBox chk in checkBoxList) { if (chk.Tag.ToString() == selected) { chk.IsChecked = true; } } } } } /// <summary> /// Gets the Availibility from the selected checkboxes /// </summary> /// <returns>String in the format of 11,12,13...41,42...31,32...37</returns> public string GetAvailibility() { string selectedText = string.Empty; foreach (CheckBox chk in GetCheckboxes()) { if (chk.IsChecked == true) { selectedText = chk.Tag.ToString() + "," + selectedText; } } return selectedText; } In my example I am using a matrix format for day and time: Monday=1, Tuesday=2, Wednesday=3, Thursday=4, Friday=5, Saturday=6, Sunday=7; and Morning=1, Afternoon=2, Evening=3. So Morning-Monday is represented as 11; similarly, Evening-Saturday is 36, and so on.
Going the other way, to set values in the control I pass them in the same format, as in preferredTimeControl.SetAvailibility("11,12,13,16,23,22"); This will check the boxes for the Morning of Monday, Tuesday, Wednesday and Saturday, and for the Afternoon of Tuesday and Wednesday. To use this control, first import its xmlns namespace as xmlns:controls="clr-namespace:PreferredTimeControlApp" and then put it in your page wherever you want: <Grid x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}"> <Border x:Name="ContentBorder" Style="{StaticResource ContentBorderStyle}"> <controls:PreferredTimeControl x:Name="preferredTimeControl"></controls:PreferredTimeControl> </Border> </Grid> And in the code-behind you can just include this code: private void InitializeControl() { preferredTimeControl.SetAvailibility("11,12,13,16,23,22"); } And you are ready to go. For more details you can refer to the attached code. I know there may be an even simpler and better way to do this; let me know if you have other ideas. Sorry guys, I have still used Silverlight 3 and VS2008, as the system I am uploading from has not yet been upgraded, but you can use the same code with Silverlight 4 and VS2010 without any changes. It may just ask you to upgrade your project, which will take care of the rest. Download Source Code. Thanks ~Brij

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
One of the newest ETL tools is CSVexpress.com. This is a program that can quickly load any CSV file into ODBC-compliant databases using data integration. For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject over here: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress. You might know that RDBMSs have built-in support for loading CSV files into tables – but it is not quite as easy as one click of a button. First of all, most databases have a command-line interface, and you need the file and a configuration script in order to load the data. You also need to know enough to write the script – which for novices can be extremely daunting. On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be! There are many other tools that enable uploading files to a database. They can be very fancy – some can generate configuration files automatically, others load the data directly. Again, novices will be able to tell you why these aren't the most useful programs in the world. You see, these programs were created with SQL in mind, not for uploading data. If you don't have large amounts of data to upload, getting the configurations right can be a long process, and you will have to check the generated code yourself. Not exactly “easy to use” for novices. That makes CSVexpress.com one of the best new tools available for everyone – but especially for people who don't want to learn a lot of new material all at once. CSVexpress has an easy-to-navigate graphical user interface and no scripting or coding is required. There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen. But the best thing of all – it's free! That's right, you can download CSVexpress for free from www.csvexpress.com and start uploading and configuring files almost immediately. If you're currently happy with your method of data configuration, keep up the good work. For the rest of us, there's CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
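For contrast, here is roughly what the "manual" route looks like in SQL Server itself. BULK INSERT is the standard T-SQL command; the table name, file path and CSV layout below are hypothetical stand-ins chosen for this sketch, not anything prescribed by CSVexpress.
-- The target table must already exist and match the CSV layout
CREATE TABLE dbo.SalesImport (OrderID INT, CustomerName VARCHAR(100), Amount DECIMAL(10,2));
GO
-- Load the file; delimiters and header handling must be spelled out by hand
BULK INSERT dbo.SalesImport
FROM 'C:\data\sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
GO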

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently. “Hi Pinal, In our system, when we create new databases, by default they are all created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them. 1) Creating a Policy; when it is violated, the database model can be fixed 2) Triggers at Server Level 3) An automated job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the databases and change their recovery model Also, we have a situation where we need a database in the Simple Recovery Model as well – how do we white-list them? Please suggest the best method.” Indeed, an interesting email! The answer to their question about which method best fits their needs (white list, default, etc.) is NONE of the above. Here is the solution in one line, and it is also the easiest way: just go to your Model database: Path in SSMS >> Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model >> Select Full from the dropdown. Every newly created database takes its base template from the Model Database. If you create a custom SP in the Model Database, it will automatically exist in every database you create afterwards. Any database that was created before making changes in the Model Database will not be affected at all. Creating a Policy is also a good method, and I will blog about it in a separate blog post, but looking at the current specifications of the reader, I think the Model Database should be modified to have the Full Recovery option. While writing this blog post, I remembered another blog post of mine where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big. NOTE: Please do not touch the Model Database unnecessarily. It is a strict “No.” If you want to create an object that you need in all databases, then instead of creating it in the model database, I suggest that you create a new database called maintenance and create the object there. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
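If you would rather script the change than click through SSMS, the same fix is a one-line ALTER DATABASE against model, followed here by a simple verification query. Both statements are standard T-SQL; this is an editor's sketch of the scripted equivalent of the SSMS steps above.
-- Make every future database default to the Full Recovery Model
ALTER DATABASE model SET RECOVERY FULL;
GO
-- Verify the recovery model of all databases, including model
SELECT name, recovery_model_desc FROM sys.databases;
GO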

    Read the article

  • SQLAuthority News – Presented Soft Skill Session on Presentation Skills at SQL Bangalore on May 3, 2014

    - by Pinal Dave
I have presented on various database technologies for almost 10 years now. SQL, databases and NoSQL have been part of my life. Earlier this month, I had the opportunity to present on the topic Performing an Effective Presentation. I must say it was a blast to prepare as well as present this session. This event was part of the SQL Bangalore community. If you are in Bangalore, you must be part of this group. SQL Bangalore is a wonderful community and we always have a great response when we present on technology. It is a SQL User Group and we discuss everything SQL there. This month we had a SQL Server 2014 theme and a community launch of SQL Server 2014. We had the best of the best speakers presenting on SQL Server 2014 technology, and each of them did justice to the subject. You can read about this over here. In this session I told a story from my life. I talked about who inspired me and how I learned to speak in public, and I told stories about two legends who have inspired me. There is no video recording of this session. If you want the resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn. Well, I had a great time at this event. Over 250 people showed up, and we had a grand time together. I personally enjoyed the sessions of Amit Benerjee, Balmukund Lakhani and Vinod Kumar. Ken and Surabh also entertained the audience. Overall, this was a grand event; if you were in Bangalore and did not make it, you missed out on a few things. Here are a few photos of this event: SQL Bangalore UG; Nupur, Chandra, Shaivi, Balmukund, Amit, Vinod; SQL Bangalore UG Audience; Pinal Dave presenting at the SQL UG in Bangalore. Here are a few of the slides from this presentation. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL

    Read the article

< Previous Page | 444 445 446 447 448 449 450 451 452 453 454 455  | Next Page >