Search Results

Search found 18001 results on 721 pages for 'difficult people'.

  • How to Switch Chrome’s Default Search to International Google

    - by Erez Zukerman
    Google Chrome’s default search engine is Google. This makes perfect sense; the only problem is that it uses localized Google – for example, Google France or Google Israel. This impacts the interface language, and sometimes even the text orientation. Here’s how you can fix this and get “international” Google results with an English interface. First, we need to figure out what search query we’re going to use. Go to Google.com and execute a simple query for a single word – say “cats”. If you get real-time results, hit Enter so that the address bar updates with the query URL. It should look something like this: http://www.google.com/#sclient=psy&hl=en&site=&source=hp&q=cats&aq=f&aqi=g1g-s1g3&aql=&oq=&pbx=1&bav=on.2,or.&fp=369c8973645261b8 If you wish to customize your search further, click Advanced Search. For example, I would like Google to annotate results with the reading level they require, so I can see what’s going to be difficult to read.
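
    The key piece of that URL is the hl=en parameter, which pins the interface language to English. As a minimal sketch (an illustrative template, not taken from the article), a URL along these lines can be registered as a custom search engine in Chrome’s options, with Chrome’s %s placeholder standing in for the typed query:

        http://www.google.com/search?hl=en&q=%s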

    Read the article

  • SWFObject and IE6 causing hair-pulling agony

    - by Piet
    I recently used SWFObject to display a flash header on a website. I chose SWFObject because: Instead of displaying an annoying ‘Install flash now’ message, it claims to be able to show alternate content. In this case: the original header image. It claims to be compatible with more or less every browser out there. Implementation went fine, until someone tested it on IE6 and got the following error: Internet explorer cannot open the Internet site http://www….. Operation aborted Which basically means that the site just can’t be visited with IE6 (still used a lot in business environments); it even seems as if there’s something wrong with your internet connection. Since about 10% of visitors to this site are still using IE6, this was a real problem (why does everyone still use Internet Explorer ???? Do YOU know that these days most people do NOT use Internet Explorer anymore?). After some googling, I found the suggestion to defer loading of the SWFObject.js as follows: <script type="text/javascript" defer="defer" src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js"></script> <script type="text/javascript" defer="defer"> swfobject.registerObject("myId", "9", ""); </script> What this does according to W3C: When set, this boolean attribute provides a hint to the user agent that the script is not going to generate any document content (e.g., no “document.write” in javascript) and thus, the user agent can continue parsing and rendering. I don’t know exactly why, but: HURRAY! It works now!!! Only… IE6 and IE7 (didn’t try IE8) now gave the following error: Line: 19 Char: 1 Error: ’swfobject’ is undefined Code: 0 URL: http://www… But the flash was still running fine. Still, such an error isn’t clean, especially since almost half of the site’s visitors are using one of these Internet Explorer versions. Wanting a quick fix, I decided to do the following: <script type="text/javascript" defer="defer"> if (typeof(swfobject) != "undefined") swfobject.registerObject("myId", "9", ""); </script> I admit this is a bit of a weird ‘fix’. You’d suspect the flash to stop working on IE6/IE7, which it doesn’t. Not planning on diving into its inner bowels, I regard this as ‘mission accomplished’ until someone somewhere posts a better solution (for which I set up some Google alerts). Do you have a better solution? What would be the impact on the webdev economy (or your life) if all browsers were compatible? Addendum Because the above turned out not to work with the new Firefox 3.5.3 (strangely, it was OK with 3.5.2 when I tested it) I decided to cut the crap and use the ‘Dynamic Publishing’ way. Ok, so it won’t work for people who have javascript disabled, but who on earth would have flash installed AND javascript disabled? To avoid the IE6 error with the ‘Dynamic Publishing’ way, I call swfobject.embedSWF right after the div that will be replaced with the flash content. Calling swfobject.embedSWF in the <head> would otherwise give me the above error in IE6 again.
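
    For reference, here is a minimal sketch of that ‘Dynamic Publishing’ setup (the file names, element id and dimensions are made up for illustration):

        <div id="myId">
          <img src="header.jpg" alt="Header" /> <!-- fallback shown if flash is unavailable -->
        </div>
        <script type="text/javascript">
          // called right after the div, not from the <head>, to avoid the IE6 error
          swfobject.embedSWF("header.swf", "myId", "780", "140", "9.0.0");
        </script>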

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address this problem.
    Scenario
    Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line of business applications to be able to access reports from Cognos to support their various functions. The service was intended to provide access to reports which were quick running reports or pre-generated reports which could be accessed real-time on demand. One of the key aims of the web service was to provide a simple generic interface to allow applications to get any report without needing to worry about the complex .net SDK for Cognos. The web service also supported multi-hop kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.
    The Problem
    The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience, however when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:
    - Users may be affected and the latency of on demand reports was significantly slower
    - The Cognos infrastructure was not scaled sufficiently to be able to cope with these long peaks of load
    - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle a load that only occurs for a couple of hours each night
    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could however make some changes to the way it accessed the reports.
    The Approach
    My idea was to introduce a throttling mechanism between the Web Service Façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS7 where the MSMQ-Activated WCF services could have helped, but we did have MSMQ as an option and I thought NServiceBus could do just the job to help us here.
    The flow of how this would work was as follows:
    - The batch applications would send a request for a report to the web service
    - The web service uses NServiceBus to send the message to a Queue
    - The NServiceBus Generic Host is running as a windows service with a message handler which subscribes to these messages
    - The message handler gets the message, accesses the report from Cognos
    - The message handler calls back to the original batch application; this is decoupled because the calling application provides a call back url
    - The report gets into the batch application and is processed as normal
    This approach looks something like the below diagram. The key points are that an application wanting to take advantage of the batch driven reports needs to do the following:
    - Implement our call back contract
    - Make a call to the service providing a call back url
    - Provide a correlation ID so it knows how to tie each response back to its request
    What does NServiceBus offer in this solution
    So this scenario is not the typical messaging service bus type of solution people implement with NServiceBus, but it did offer the following:
    - Simplified interaction with MSMQ
    - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos versus the applications' end to end processing time
    - Retries and a way to manage failed messages
    - A high availability setup
    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler (see the sketch below) which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower level things that would have taken ages to write to any kind of robust level.
    Conclusion
    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
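
    As a rough sketch of that handler (an illustration only - the message contract, the Cognos gateway and the callback helper are hypothetical names, not from the project; IMessage and IHandleMessages<T> are the real NServiceBus interfaces):

        using System;
        using NServiceBus;

        // Hypothetical message contract shared by the web service and the host
        public class ReportRequest : IMessage
        {
            public string ReportName { get; set; }
            public Guid CorrelationId { get; set; }
            public string CallbackUrl { get; set; }
        }

        // Runs inside the NServiceBus Generic Host windows service
        public class ReportRequestHandler : IHandleMessages<ReportRequest>
        {
            public void Handle(ReportRequest message)
            {
                // Access the report from Cognos (hypothetical gateway)
                byte[] report = CognosGateway.GetReport(message.ReportName);

                // Call back to the originating batch application, passing the
                // correlation id so it can tie the response to its request
                CallbackClient.Post(message.CallbackUrl, message.CorrelationId, report);
            }
        }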

    Read the article

  • Interviews Gone Bad.....Now What Do I Do?

    - by david.talamelli
    We have all done it at some stage of our working careers - you know those times when you leave an interview and then you think to yourself "why didn't I ask that question" or "I can't believe I said that" or "how could I have forgotten to say that". It happens to everyone but how you handle things moving forwards could be critical in helping you land that dream job. There is nothing better than seeing that dream job with the dream company that you are looking to work for advertised (or in some cases getting called by the Recruiter to let you know about that job). The role may seem perfect and it could be just what you are looking for and it is with the right company as well. You have sent in your resume and have subsequently had one, two or maybe three interviews for the role. After each step of the process you get a little bit more excited about the role as you start to think about your work day in your new role/company. Then it happens, you get it: you get The Phone Call to inform you that you have not been successful in securing the position that you have invested so much time and effort into. It can be disappointing to hear this news but what you do next is important in potentially keeping that door open for future opportunities with that company. How you handle yourself in this situation is important: if any of you remember the Choose Your Own Adventure books, do you:
    1. Tell the Recruiter (maybe get aggressive) they are wrong in their assessment and that you are the right candidate for the role
    2. Switch off, say ok thanks and hang up without engaging in any further dialogue
    3. Thank the company for their time and enquire if there may be any other opportunities in the future to explore
    If you chose the first option - the company in question may reconsider whether or not to look at you for other opportunities. How you handle yourself in the recruitment process could be an indication of how you would deal with clients/colleagues in your role, and the impression that you leave a potential employer may be what sticks in their mind when they think of you (e.g.: isn't that the person who couldn't handle it when we told him he wasn't right for our role). The second option potentially produces a similar outcome. If you rush to get off the phone, the company may come back to you to talk about other roles when they come up, but you also leave open the potential thought with the company that you were only interested in that one role and therefore not interested in any other opportunities. Why take the risk of the company thinking that and potentially not getting back to you in the future? By picking the third option, you actively engage with the company and keep the dialogue open for future discussions. Ok, so you didn't get the role you interviewed for - you don't know who else the company may have been interviewing - maybe they found someone who was a better fit, or maybe there were too many boxes you didn't tick to step straight into that specific role. Take a deep breath and keep the company engaged. You are fresh in their mind - take advantage of that fact and let them know that while you respect their decision, you are still interested in the company and would like to be kept in mind for future roles. Ask if it is ok to keep in touch and when it would be appropriate to do so; as long as you are interested, let them know you are still interested. You do need to balance that though: if you come across as too keen or start stalking people - it could equally damage your brand.
    Companies normally have more than one open role. New roles are created all the time, circumstances change and hiring people is not a static business; it changes course from everyone's best laid initial plans. If you didn't get that initial role you wanted, keep the door open with that company so that when those new roles do come up or when circumstances do change, you have already laid the groundwork to step into those new positions.

    Read the article

  • ASP.NET MVC Framework

    - by Aamir Hasan
    MVC is a design pattern - a reusable "recipe" for constructing your application. Generally, you don't want your user interface code and data access code to be mixed together; it makes changing either one more difficult. By placing data access code into a "Model" object and user interface code into a "View" object, you can use a "Controller" object to act as a go-between, sending messages/calling methods on the view object when the data changes and vice versa. Model-view-controller (MVC) is an architectural pattern used in software engineering. In complex computer applications that present a large amount of data to the user, a developer often wishes to separate data (model) and user interface (view) concerns, so that changes to the user interface will not affect data handling, and the data can be reorganized without changing the user interface. The model-view-controller pattern solves this problem by decoupling data access and business logic from data presentation and user interaction, by introducing an intermediate component: the controller.
    Model: The domain-specific representation of the information that the application operates on. Domain logic adds meaning to raw data (e.g., calculating whether today is the user's birthday, or the totals, taxes, and shipping charges for shopping cart items). Many applications use a persistent storage mechanism (such as a database) to store data. MVC does not specifically mention the data access layer because it is understood to be underneath or encapsulated by the Model.
    View: Renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model for different purposes.
    Controller: Processes and responds to events, typically user actions, and may invoke changes on the model.
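
    As a minimal ASP.NET MVC sketch of the three roles (the Product model and its sample data are illustrative, not from the article):

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Model: domain-specific data plus domain logic
        public class Product
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        // Controller: handles the user action, gathers model data,
        // and hands it off to a view for rendering
        public class ProductsController : Controller
        {
            public ActionResult Index()
            {
                // In a real application this would come from the data access
                // layer encapsulated by the Model
                var products = new List<Product>
                {
                    new Product { Name = "Widget", Price = 9.99m }
                };
                return View(products); // View: the Index view renders the model
            }
        }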

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Inside Red Gate - Exercises in Leanness

    - by simonc
    There's a new movement rumbling around Red Gate Towers - the Lean Startup. At its core is the idea that you don't have to be in a company with single-digit employees to be an entrepreneur; you simply have to (being blunt) not know what you should be doing. Specifically, you accept that you don't know everything you need to know in order to create a useful, successful & profitable product. This is something that Red Gate has had problems with in the past; we've created products that weren't aimed at the correct market, or didn't solve the problem the user had (although they solved the problem we thought the users had, or the problem the users thought they had). As a result, these products weren't as successful as they could have been. The ideas at the core of the Lean Startup help to combat this tendency to build large, well-engineered products that solve the wrong problem. You need to actually test your hypotheses about what the users and the market need, rather than just running a project based on those untested assumptions. Furthermore, these tests need to be done as fast as possible (on the order of a week) so that, if necessary, you can change the direction of the project without wasting effort going down a dead end. Over time, as more tests are done and more hypotheses are confirmed or refuted, the project moves towards something that solves users' actual problems. However, re-aligning the development teams that operate within Red Gate along these lines does itself have some issues; we've got very good at doing large, monolithic releases, with a feature set decided well in advance. Currently it takes about 2 weeks to do install & release testing before a release; this is clearly not practicable for a team doing weekly, or even daily releases. There are also many infrastructure issues to be solved; in our source control, build system, release mechanism, support pages & documentation, licensing system, update system, and download pages. All these need modifications to allow the fast releases necessary for each experiment. Not only do we have to change our infrastructure, we have to change our mindset. Doing daily releases means each release won't get nearly as much testing as 'standard' releases. As a team, we have to be prepared that there will be releases that have bugs and issues with them; not only do we have to be prepared to change direction with every experiment we do, but we have to be ready to fix any bugs that are reported very quickly as well. The SmartAssembly team is spearheading this move towards leanness within the company, using Feature Usage Reporting (FUR). We think this is a cracking feature that will really help developers learn how people use their products, but we need to confirm this hypothesis. So, over the next few weeks, we'll be running a variety of experiments on SmartAssembly to either confirm or refute our hypotheses concerning how people use SmartAssembly and apply FUR to their own products. In the rest of this series, I'll be documenting how the experiments we perform get on, and our experiences with applying the Lean Startup model to a mature product like SmartAssembly. Cross posted from Simple Talk.

    Read the article

  • You Are Hiring But Do Candidates Want to Work For You?

    - by david.talamelli
    So here you are – it has happened, you are now interviewing for that position that you have either applied for or maybe were called about. Whether you are an “active” candidate looking for a job or a “passive” candidate who was contacted about the opportunity, it doesn’t matter now. Regardless of the circumstances of how you got to the interview stage, how you and your new potential manager connect with each other at interview will play a part in whether you are successful in landing that job. The best manager/employee relationships, I think, tend to be the ones where both the manager and employee have a common goal that they are working towards, and they work together in unison to achieve it. Candidates – when you are interviewing for a role, remember that an interview is a two way process. An interview shouldn’t be just a case of a company interviewing you to see if you are a good fit for a certain role. Don’t forget that in an interview process it is equally important that you take the opportunity to similarly interview the company, to see if that role/company is the right place for you to move to as the next step in your career. I think an interview should not only be a chance for a Hiring Manager to get to better know a candidate and assess their capability and cultural fit for a team/company, but it should also be a chance for the candidate to similarly assess a company or manager about whether they are someone they want to work with. Managers – I know Recruiters have been talking about the “war for talent” since before many of you were managers, but there is no denying it – it exists. You are not only competing with other companies for talented individuals but you are also competing with the existing companies that those talented individuals are working at. Companies are not going to let the people they have identified as superstars resign without a fight (this is the classic Counter Offer scenario, which may be another blog post in itself). So how do we get these great people – their current employer will do all they can to keep them, everyone else wants them – does this mean all hope is lost? No, absolutely not. The same reasons that have always existed for why candidates are interested in other opportunities are still there: it could be that someone is looking for career advancement, or they want the chance to work with new technology, or maybe you have an opportunity that is exactly what that person is looking to do. As a Hiring Manager don’t just conduct your interviews in question/answer mode. You should talk to that individual to work out what it is they are looking for, and you can then relate how your role addresses that. It is potentially going to be the two of you working together, so you two are the ones who have to be most comfortable with each other. Don’t oversell the role – set realistic expectations of what that candidate can expect working in your team – give them the good, the bad and the ugly so they can make an informed decision. Managers, think back to when you were last looking for a job and put yourself in the candidate’s shoes. When you were looking for a job, what was it that you wanted to know about Oracle, or what was it that you wanted more information about? There are some great Business Leaders that work here at Oracle – if you are one of them it is likely that you are already doing all these things anyway.
    The good news for you is that you are also likely raising yourself head and shoulders above what many interviewers do – that in itself gives you a competitive advantage in this ‘war for talent’, but as a great Business Leader you already know that.

    Read the article

  • Introducing Kiddo: A Ruby DSL for building simple WPF and Silverlight applications

    - by fdumlao
    Read the original article here... As a long time Ruby lover and deep rooted .NET supporter, I was probably more psyched than anyone I knew when IronRuby 1.0 was finally released. I immediately grabbed it and started building some apps to see where the boundaries were going to lie between IronRuby and ruby.exe, and so far I've been pleasantly surprised by how many things just work as I'd expect. I then started to try out some of my favorite libs that I was sure would not work with IronRuby, and I wasn't surprised at all when _why's amazing Shoes library didn't work. Being somewhat familiar with Shoes (it's a great DSL for building simple UIs in Ruby) I felt it wouldn't be that difficult to port it over and, as it turned out, someone else had already started the work. As cool as this was, I was never quite satisfied with good ol' Shoes. While it was quite complete, it lacked simple extensibility points, and although easy, it wasn't quite "kid friendly". At the same time on the .NET side of the fence, IronRuby could easily compile XAML to create WPF and Silverlight UIs, but trying to do it declaratively in plain Ruby was no fun at all. And so, the Shoes-inspired WPF/Silverlight GUI DSL was born. (and it lives here: http://bitbucket.org/fdumlao/kiddo/src) Introducing Kiddo Tell you what. Let's start with a quick code example first. We'll build a useful app that we can use to quickly reverse strings whenever we need it. Read the complete article here...

    Read the article

  • ASP.NET Meta Keywords and Description

    - by Ben Griswold
    Some of the ASP.NET 4 improvements around SEO are neat.  The ASP.NET 4 Page.MetaKeywords and Page.MetaDescription properties, for example, are a welcomed change.  There’s nothing earth-shattering going on here – you can now set these meta tags via your Master page’s code behind rather than relying on updates to your markup alone.  It isn’t difficult to manage meta keywords and descriptions without these ASP.NET 4 properties but I still appreciate the attention SEO is getting.  It’s nice to get a gentle reminder via new coding features that some of the more subtle aspects of one’s application deserve thought and attention too.  For the record, this is how I currently manage my meta: <meta name="keywords"     content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Keywords"]) %>" /> <meta name="description"     content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Description"]) %>" /> All Master pages assume the same keywords and description values as defined by the application settings.  Nothing fancy. Nothing dynamic. But it’s manageable.  It works, but I’m looking forward to the new way in ASP.NET 4.
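
    For comparison, a minimal sketch of the ASP.NET 4 approach from a Master page’s code-behind (the class name is hypothetical; it assumes the same Meta.* app settings as above):

        using System;
        using System.Configuration;
        using System.Web.UI;

        public partial class Site : MasterPage
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Page is the content page hosting this Master page; ASP.NET 4
                // renders these values as <meta name="keywords"> and
                // <meta name="description"> tags in the page head
                Page.MetaKeywords = ConfigurationManager.AppSettings["Meta.Keywords"];
                Page.MetaDescription = ConfigurationManager.AppSettings["Meta.Description"];
            }
        }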

    Read the article

  • SQL SERVER – Understanding XML – Contest Win Joes 2 Pros Combo (USD 198) – Day 5 of 5

    - by pinaldave
    August 2011 we ran a contest where every day we gave away one book for an entire month. The contest was an extreme success: lots of people participated and lots of books were given away. I have received lots of questions about whether we are doing something similar this month. Absolutely - instead of running a month-long contest we are doing something more interesting. We are giving away a gift worth USD 198 every day this week: the Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit, one copy in India and one in USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza
    How to Win:
    1. Read the Question
    2. Read the Hints
    3. Answer the Quiz in the Contact Form in the following format: Question Answer, Name of the country (The contest is open for USA and India residents only)
    2 Winners will be randomly selected and announced on August 20th.
    Question of the Day: Is the following XML a well formed XML document?
    <?xml version="1.0"?>
    <address>
    <firstname>Pinal</firstname>
    <lastname>Dave</lastname>
    <title>Founder</title>
    <company>SQLAuthority.com</company>
    </address>
    a) Yes
    b) No
    c) I do not know
    Query Hints: BIG HINT POST A common observation by people seeing an XML file for the first time is that it looks like just a bunch of data inside a text file. XML files are text-based documents, which makes them easy to read.  All of the data is literally spelled out in the document and relies on just a few characters (<, >, =) to convey relationships and structure of the data.  XML files can be used by any commonly available text editor, like Notepad. Much like a book’s Table of Contents, your first glance at well-formed XML will tell you the subject matter of the data and its general structure. Hints appearing within the data help you to quickly identify the main theme (similar to a book’s subject), its headers (similar to chapter titles or sections of a book), data elements (similar to a book’s characters or chief topics), and so forth. We’ll learn to recognize and use the structural “hints,” which are XML’s markup components (e.g., XML tags, root elements). The XML Raw and Auto modes are great for displaying data as all attributes or all elements – but not both at once. If you want your XML stream to have some of its data shown in attributes and some shown as elements, then you can use the XML Path mode. If you are using an XML Path stream, then by default all values will be shown as elements. However, it is possible to pick one or more elements to be shown with an attribute(s) as well (see the T-SQL sketch at the end of this entry). Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 5: SQL Joes 2 Pros Development Series – OpenXML Options; Preparing XML in Memory; Shredding XML; Using Root With Auto XML Mode; What is XML?; What is XML? – 2. Next Step: Answer the Quiz in the Contact Form in the following format: Question - Answer, Name of the country (The contest is open for USA and India). Bonus Winner: Leave a comment with your favorite article from the “additional hints” section and you may be eligible for a surprise gift. There is no country restriction for this Bonus Contest.
    Do mention why you liked any particular blog post and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
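
    As a small illustration of that Path-mode point from the hints (the table and column names here are hypothetical, not from the contest material):

        -- In PATH mode a column aliased with @ becomes an attribute;
        -- a plain column becomes an element
        SELECT FirstName AS [@first],
               LastName
        FROM   dbo.Person
        FOR XML PATH('Person'), ROOT('People');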

    Read the article

  • How to become a Kernel/Systems/Device driver programmer?

    - by accordionfolder
    Hello all! I currently work in a professional capacity as a software engineer working with the Android OS. We work at integrating our platform as a native daemon among other facets of the project. I primarily work in Java developing the SDK and Android applications, but get to help with the platform in C/C++. Anywho, I have a great interest in working professionally on low-level Linux development. I am not unhappy in my current position and will hang around as long as the company lets me (as a matter of fact I quite enjoy working there!), but I would like to work my way in that direction. I've been working through Linux Kernel Development (Robert Love) and The Linux Programming Interface (Michael Kerrisk) (in addition to strengthening my C skills at every chance I get) and casually browsing Monster and similar sites. The problem I see is, there are no entry level positions. How does one break into this field? Anytime I see "Linux Systems Programmer" or "Linux Device Driver Programmer" they all require at the minimum 5-7 years of relevant experience. They want someone who knows the ropes, not a junior level programmer (I've been working for 7 months now...). So, I'm assuming that some of you on stackoverflow work in a professional capacity doing just what I would like to do. How did you get there? What platforms did you use to work your way there? Am I going to have a more difficult time because I have my bachelors in CSC as opposed to computer engineering (where I would have gotten a bit more exposure to embedded work, asm, etc)?

    Read the article

  • SQL SERVER – Copy Statistics from One Server to Another Server

    - by pinaldave
    I was recently working on a performance tuning project in Dubai (yeah, I was able to see the tallest tower from the window of my work place). I had a very interesting learning experience there. There was a situation where we wanted to receive the schema of the original database from a certain client. However, the client was not able to provide us any data due to privacy issues. The schema alone was of limited use because, without access to the underlying data, it was a bit difficult to judge the queries etc. For example, without any primary data, all the queries run in 0 (zero) milliseconds and all use nested loops, as there is no data to be returned. Even though we had CPU offending queries, they were not doing anything without the data in the tables. This was really a challenge, as I did not have access to the production server data and I could not recreate the production scenarios without it. Well, I was confused, but Ruben from Solid Quality Mentors, Spain taught me a new trick. He suggested that when the table schema is generated, we can script out the statistics along with it. Once statistics are created along with the schema, even without data in the tables, all the queries behave just as they would on the production server. This way, without access to the data, we were able to recreate the same scenario as the production server on the development server. When you look at the generated script, you will find that the statistics were generated along with the schema; you will find the statistics included in a WITH STATS_STREAM clause (a rough sketch of such a statement is below). What a simple and effective script. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Statistics, Statistics
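
    As a rough, hypothetical sketch of the kind of statement such a generated script contains (the object names and the binary stream are placeholders, not the actual values):

        -- Produced when scripting a table with statistics and histograms enabled;
        -- the binary blob carries the histogram, so no table data is required
        UPDATE STATISTICS [dbo].[MyTable] ([MyStatistics])
        WITH STATS_STREAM = 0x0100 /* long binary histogram elided */,
             ROWCOUNT = 1000000, PAGECOUNT = 12500;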

    Read the article

  • How to penetrate the QA industry after layoffs, next steps...

    - by Erik
    Briefly, my background is in manual black box testing of websites and applications within the Agile/waterfall context. Over the past four years I was a member of two web development firms' small QA teams dedicated to testing the deployment of websites for national/international non-profits, governmental organizations, and for profit businesses, to name a few: -Brookings Institution -Senate -Tyco Electronics -Blue Cross/Blue Shield -National Geographic -Discover Channel I have a very strong understanding of the: -SDLC -STLC of bugs and website deployment/development -Use Case & Test Case development In March of this year, my last firm downsized and I lost my job as a QA tester. I have been networking and doing a very detailed job search, but have had a very difficult time getting my next job within the QA industry, even with my background as a manual black box QA tester in the website development context. My direct question to all of you: What are some ways I can be more competitive and get hired? Options that could get me competitive: Should I go back to school and learn some more 'hard' skills in website development and client side technologies, e.g.: -HTML -CSS -JavaScript Learn programming: -PHP -C# -Ruby -SQL -Python -Perl -?? Get Certified as a QA Tester; there are a countless number of programs to become a Certified Tester. Most, if not all, jobs being advertised now require Automated Testing experience, in: -QTP -Loadrunner -Selenium -ETC. Should I learn Automated testing skills via a paid course, or teach myself? --Learn scripting languages to understand the automated testing process better? Become a Certified "Project Management Professional" (PMP) to prove to hiring managers that I 'get' the project development life cycle? At the end of the day I need to be competitive and get hired as a QA tester and want to build upon my skills within the QA web development field. How should I do this, without reinventing the wheel? Any help in this regard would be fabulous. Thanks! .erik

    Read the article

  • Teeing Off With Chris Leone at OpenWorld 2012

    - by Kathryn Perry
    A guest post by Chris Leone, Senior Vice President, Oracle Applications Development
    Monday morning in downtown San Francisco - lots of sunshine, plenty of traffic, and sidewalks chock-full of people with fresh faces and blister-free feet. Let the week of Oracle OpenWorld begin! For a great Applications start, Chris Leone packed the house with his Fusion Applications overview session - he covered strategy, scope, roadmaps, and customer successes. Fusion Apps, the world's best SaaS suite, is built on 100 percent standards. Chris talked about its information driven user experience, its innovative design, and the choice of deployment. People can run Fusion in the cloud, in a managed / hosted environment, or on premise -- or they can use a combination of these three models. About seventy percent of our customers go with SaaS. Release 5 of Fusion Apps will become available soon. The cadence of releases will be three times a year. The key drivers are to accelerate business success (no rip and replace) and to simplify business processes. Chris told the audience that organic Fusion is the centerpiece of our cloud solutions, rounded out with acquired offerings such as Taleo Recruiting and RightNow Customer Service. From the cloud solutions, customers can expect real time and predictive BI, social capabilities, choice of deployment, and more productivity because of a next generation UX called FUSE. Chris's demo showed a super easy, new UI that touts self service navigation. We'll blog about FUSE in the very near future. Chris said the next 365 days of Fusion Apps would include more localization, more industries, more power, more mobile, and more configurability. The audience was challenged to think hard about how Fusion could be part of their three-to-five year plans. Chris set up a great opportunity for you to follow up with your customers as they explore the possibilities.

    Read the article

  • Don’t Kill the Password

    - by Anthony Trudeau
    A week ago Mr. Honan from Wired.com penned an article on security he titled “Kill the Password: Why a String of Characters Can’t Protect Us Anymore.” He asserts that the password is not effective and a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking. As a result he has a victim’s vendetta. His conclusion is ill conceived even though there are smatterings of truth and good advice. The password is a security barrier much like a lock on your door. In and of itself it’s not guaranteeing protection. You can have a good password akin to a steel reinforced door with the best lock money can buy, or you can have a poor password like “password” which is akin to the sliding latch on a bathroom stall. But, just like in the real world, a lock isn’t always enough. You can have a lock, security system, video cameras, guard dogs, and even armed security guards; but none of that guarantees your protection. Even top secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that’s the crux of it. There are real hackers out there that are that good. Killer coding ninja monkeys do exist! We still have locks on our doors, because they still serve their role. Passwords are no different. Security doesn’t end with the password. Most people would agree that stuffing your mattress with your life savings isn’t a good idea even if you have the best locks and security system. Most people agree it’s safest to have the money in a bank. Essentially this is compartmentalization. Compartmentalization extends to the online world as well. You’re at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you’re lackadaisical about linking those social networks to outside sources including apps. The object here is to minimize the damage that can be done. An attacker should not be able to get into your bank account because they breached your Twitter account. It’s time to prioritize once you’ve compartmentalized. This simply means deciding how much security you want for the different compartments, which I’ll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It’s similar to an engineering adage, but in this case it’s security, convenience, and privacy – pick two. For example, you might use a safe instead of a bank to store your money, because the convenience of having your money closer or the privacy of not having the bank records is more important than the added security. The following are lists of security do’s and don’ts (these aren’t meant to be exhaustive and each could be an article in and of itself):
    Security Do’s:
    - Use strong passwords based on a phrase
    - Use encryption whenever you can (e.g. HTTPS in Facebook)
    - Use a firewall (and learn to use it properly)
    - Configure security on your router (including port blocking)
    - Keep your operating system patched
    - Make routine backups of important files
    - Realize that if you’re not paying for it, you’re the product
    Security Don’ts:
    - Link accounts if at all possible
    - Reuse passwords across your security zones
    - Use real answers for security questions (e.g. mother’s maiden name)
    - Trust anything you download
    - Ignore message boxes shown by your system or browser
    - Forget to test your backups
    - Share your primary email indiscriminately
    Only you can decide your comfort level between convenience, privacy, and security.
    Attackers are going to find exploits in software. Software is complex and depends on other software. The exploits are the responsibility of the software company. But your security is always your responsibility. Complete security is an illusion. But there is plenty you can do to minimize the risk online, just like you do in the physical world. Be safe and enjoy what the Internet has to offer. I expect passwords to be necessary just as long as locks.

    Read the article

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
    WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication: PeoplePicker finds no users. After finding these steps online from http://dattard.blogspot.com/2008/11/active-directory-forms-based.html in order to set up Active Directory Forms Based Authentication, I was all set to complete this task, except for one problem. These steps are missing one vital step in order for FBA to work with Active Directory: a supplement to step 3, before granting access in step 5 through the people picker. You need to specify the Active Directory provider name to the people picker, otherwise you will not be able to specify users through the Policy for Web Application:
    <PeoplePickerWildcards>
      <clear />
      <add key="ADMembershipProvider" value="%" />
    </PeoplePickerWildcards>
    Recently we needed to use Forms Based Authentication with Active Directory from an Extranet. This is how we got it to work.
    1. Extend the Web Application
    Instead of tweaking the internal web app, extend the web application you want to expose to the Extranet, giving it the required host headers etc.
    2. Configure SharePoint Central Admin to use FBA for the "new" Web Applications
    Login to SharePoint Central Admin. Go to Application Management / Application Security / Authentication Providers and change the Web Application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change the authentication type to forms, enter the ActiveDirectoryMembershipProvider name under membership provider name (for example, "ADMembershipProvider") and save this change.
    3. Update the web.config of the SharePoint Central Admin site
    Under the configuration node:
    <connectionStrings>
      <add name="ADConnectionString" connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" />
    </connectionStrings>
    Under the system.web node:
    <membership defaultProvider="ADMembershipProvider">
      <providers>
        <add name="ADMembershipProvider" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ADConnectionString" connectionUsername="xxx" connectionPassword="yyy" enableSearchMethods="true" attributeMapUsername="sAMAccountName" />
      </providers>
    </membership>
    4. Update the web.config of the SharePoint Web application
    Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication. Change the authentication in web.config to:
    <authentication mode="Forms">
      <forms loginUrl="/_layouts/login.aspx"></forms>
    </authentication>
    5. Grant Access on the extended Web Application
    Your extranet web application is now configured to use FBA. However, until users who will be accessing the site via FBA are given permissions for the site, it will be inaccessible to them. To get started, open your browser and navigate to your farm’s Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application. Do the following steps: Click on Add Users. In the Zones drop down, select the appropriate Extranet zone. IMPORTANT: If you select the incorrect zone, you may not be able to resolve user names. Hence, the zone you select must match the zone of the web application that is configured to use FBA. Click the Next button. In the Users edit box, type the name of the FBA user whom you wish to have full control for the site. Click the Resolve link next to the Users edit box.
If the web application's FBA information has been configured correctly, the name will resolve and become underlined. Check the Full Control checkbox. Click the Finish button.

    Read the article

  • Book Review: Programming Windows Identity Foundation

    - by DigiMortal
    Programming Windows Identity Foundation by Vittorio Bertocci is right now the only serious book about Windows Identity Foundation available. I started using Windows Identity Foundation when I made my first experiments with the Windows Azure AppFabric Access Control Service. I wanted to generalize the way people authenticate themselves to my systems, and AppFabric ACS seemed like a good place to start. My first steps trying to get things to work opened the door to a whole new authentication world for me. As I went through different blog postings and articles to get more information, I discovered that the thing I was trying to use was exactly the one I was looking for. Having found the best security API for .NET, I wanted to know more about it, and this is how I found Programming Windows Identity Foundation. What’s inside? Programming WIF focuses on the architecture, design and implementation of WIF. I think Vittorio is very good at teaching people, because you find no overly complex topics in the book. You learn more and more as you read and, as a good thing, you can also try out your new knowledge on WIF immediately. After giving a good overview of WIF, the author moves on and introduces how to use WIF in ASP.NET applications. You will get a complete picture of how WIF integrates into the ASP.NET request processing pipeline and how you can control the process yourself. There are two chapters about ASP.NET. The first one is more of an introduction and the second one goes deeper and deeper until you have a very good idea about how to use ASP.NET and WIF together, what issues you may face and how you can configure and extend WIF. Two other chapters cover using WIF with Windows Communication Foundation (WCF) and Windows Azure. The WCF chapter expects that you know WCF very well; this is not an introductory chapter for beginners, and it is heavy reading if you are not familiar with WCF. The chapter about Windows Azure describes how to use WIF in cloud applications. The last chapter talks about some future developments of WIF and describes some problems and their solutions. The most interesting part of this chapter is the section about Silverlight. Who should read this book? Programming WIF is targeted at developers. It does not matter if you are a beginner or an old bullet-proof professional – every developer should be able to read this book with no difficulties. I don’t recommend this book to administrators and project managers, because they will find almost nothing that is related to their work. I strongly recommend this book to all developers who are interested in modern authentication methods on the Microsoft platform. The book is written so well that I almost forgot all things around me when I was reading it. All the additional tools you need are free. There is also an Azure AppFabric ACS test version available and you can try it out for free.
    Table of Contents
    - Foreword
    - Acknowledgments
    - Introduction
    - Part I: Windows Identity Foundation for Everybody
      1 Claims-Based Identity
      2 Core ASP.NET Programming
    - Part II: Windows Identity Foundation for Identity Developers
      3 WIF Processing Pipeline in ASP.NET
      4 Advanced ASP.NET Programming
      5 WIF and WCF
      6 WIF and Windows Azure
      7 The Road Ahead
    - Index
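
    To give a flavor of the claims-based model the book teaches, here is a minimal, illustrative sketch against the original (pre-.NET 4.5) WIF API; it assumes the Microsoft.IdentityModel assembly that shipped with WIF 1.0 and is not an example from the book itself:

        using System.Linq;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;

        public static class ClaimsDemo
        {
            // Inside an authenticated ASP.NET request, the principal WIF builds
            // carries the claims issued by the trusted STS; we read a claim
            // instead of querying a user store ourselves
            public static string GetUserEmail()
            {
                var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;
                return identity.Claims
                    .Where(c => c.ClaimType == ClaimTypes.Email)
                    .Select(c => c.Value)
                    .FirstOrDefault();
            }
        }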

    Read the article

  • So, "Are Design Patterns Missing Language Features"?

    - by Eduard Florinescu
    I saw the answer to this question: How does thinking on design patterns and OOP practices change in dynamic and weakly-typed languages? There it links to an article with an outspoken title: Are Design Patterns Missing Language Features. In it you can find snippets that seem very objective and factual and that can be verified from experience, like: PaulGraham said "Peter Norvig found that 16 of the 23 patterns in Design Patterns were 'invisible or simpler' in Lisp." and a thing that confirms what I recently saw with people trying to simulate classes in javascript: Of course, nobody ever speaks of the "function" pattern, or the "class" pattern, or numerous other things that we take for granted because most languages provide them as built-in features. OTOH, programmers in a purely PrototypeOrientedLanguage? might well find it convenient to simulate classes with prototypes... I am also taking into consideration that design patterns are a communication tool, and because even with my limited experience participating in building applications I can see it as an anti-pattern (ineffective and/or counterproductive) to, for example, force a small PHP team to learn the GoF patterns for a small to medium intranet app, I am aware that scale, scope and purpose can determine what is effective and/or productive. I have seen small commercial applications that mixed functional with OOP and were still maintainable, and I don't know if many would need, for example, to write a singleton in python when for me a simple module does the thing (see the sketch below). So are there studies or hands-on experience shared that take into consideration all this - scale and scope of project, dynamics and size of the team, languages and technologies - so that you don't feel that a (difficult for some) design pattern is there just because there isn't a simpler way to do it, or that it cannot be done by a language feature?
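
    As a minimal sketch of that Python point (illustrative, not from the question): a module is created once on first import and cached in sys.modules, so module-level state already behaves like a singleton with no pattern code at all:

        # config.py - the module object itself plays the singleton role
        settings = {"env": "dev"}

        # app.py
        import config

        config.settings["env"] = "prod"  # every other importer of config
                                         # sees this same dictionary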

    Read the article

  • Hello World - My Name is Christian Finn and I'm a WebCenter Evangelist

    - by Michael Snow
    Good Morning World! I'd like to introduce a new member of the Oracle WebCenter Team, Christian Finn. We decided to let him do his own intro today. Look for his guest posts next week; he'll be a frequent contributor to the WebCenter blog and a voice of the community. Hello (Oracle) World! Hi everyone, my name is Christian Finn. It’s a coder’s tradition to have “hello world” be the first output from a new program or in a new language. While I have left my coding days far behind, it still seems fitting to start my new role here at Oracle by saying hello to all of you—our customers, partners and my colleagues. So by way of introduction, a little background about me. I am the new senior director for evangelism on the WebCenter product management team. Not only am I new to Oracle, but the evangelism team is also brand new. Our mission is to raise the profile of Oracle in all of the markets/conversations in which WebCenter competes—social business, collaboration, portals, Internet sites, and customer/audience engagement. This is all pretty familiar turf for me because, as some of you may know, until recently I was the director of product management at Microsoft for Microsoft SharePoint Server and several other SharePoint products. And prior to that, I held management roles at Microsoft in marketing, channels, learning, and enterprise sales. Before Microsoft, I got my start in the industry as a software trainer and Lotus Notes consultant. I am incredibly excited to be joining Oracle at this time because of the tremendous opportunity that lies ahead to improve how people and businesses work. Of all the vendors offering a vision for social business, Oracle is unique in having best-of-breed strength in market (or coming soon) in all three critical areas: customer experience management; the middleware and back-end applications that run your business; and the social, collaboration, and content technologies that are the connective tissue between them. Everyone else can offer one or two of the above, but not all three unified together. So it is a great time to come aboard, and there’s a fantastic team of people hard at work on building great products for you. In the coming weeks and months you’ll be hearing much more from us. For now, we’ll kick things off with some blog posts here on the WebCenter blog. Enjoy the reads and please share your thoughts with me over Twitter on @cfinn.

    Read the article

  • It’s the thought that counts…

    - by Tony Davis
    I recently finished editing a book called Tribal SQL, and it was a fantastic experience. It’s a community-sourced book written by first-timers. Fifteen previously unpublished authors contributed one chapter each, with the seemingly simple remit to write about “what makes them passionate about working with SQL Server, something that all SQL Server DBAs and developers really need to know”. Sure, some of the writing skills were a bit rusty, as one would expect from busy people, but the ideas and energy were sheer nectar. Any seasoned editor can deal easily with the problem of fixing the output of untrained writers. We can handle the occasional technical error too, which is why we have technical reviewers. The editor’s real job is to hone the clarity and flow of ideas, making the author’s knowledge and experience accessible to as many others as possible. What the writer needs to bring, on the other hand, is enthusiasm, attention to detail, common sense, and a sense of the person behind the writing. If any of these are missing, no editor can fix it. We can see these essential characteristics in many of the more seasoned and widely-published writers about SQL. To illustrate what I mean by enthusiasm, or passion, take a look at the work of Laerte Junior or Fabiano Amorim. Both authors have English as a second language, but their energy, enthusiasm, sheer immersion in a technology and thirst to know more drives them, with a little editorial help, to produce articles of far more practical value than one can find in the “manuals”. There’s the attention to detail of the likes of Jonathan Kehayias, or Paul Randal. Read their work and one begins to understand the knowledge, coupled with incredible rigor and the willingness to bend and test every piece of advice offered to make sure it’s correct, that marks out the very best technical writing. There’s the common sense of someone like Louis Davidson. All writers, including Louis, like to stretch the grey matter of their readers, but some of the most valuable writing is that which takes a complicated idea, or distils years of experience, and expresses it in a way that sounds like simple common sense. There’s personality and humor. Contrary to what you may have been told, they can and do mix well with technical writing, as long as they don’t become a distraction. Read someone like Rodney Landrum, or Phil Factor, for numerous examples of articles that teach hard technical lessons but also make you smile at least twice along the way. Writing well is not easy and it takes a certain bravery to expose your ideas and knowledge for dissection by others, but it doesn’t mean that writing should be the preserve only of those trained in the art, or best left to the MVPs. I believe that Tribal SQL is testament to the fact that if you have passion for what you do, and really know your topic, then, with a little editorial help, you can write, and people will learn from what you have to say. You can read a sample chapter, by Mark Rasmussen, in this issue of Simple-Talk, and I hope you’ll consider checking out the book (if you needed any further encouragement, it’s also for a good cause, Computers4Africa). Cheers, Tony

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to comma-delimited strings and parsing each value out into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

    Start Date – beginning of the date range for which the report will summarize customer transactions
    End Date – end of the date range for which the report will summarize customer transactions
    Customer Ids – one or more customer Ids representing the customers that will be included in the report

    The simplest way to get started with this report is to create a new dataset and point it at our Customer Transaction Summary stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008). When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine the parameters and output fields for you automatically. As part of this process a parameter-values dialog pops up. Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter, since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting an error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense, considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind the scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured that it might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO.NET, so I figured that I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted.

    The Text Query Approach

    Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure.
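    Before diving into the workaround, it helps to have the shape of the type and the procedure in mind. The following is only a sketch of what the user-defined type and stored procedure signature from the last post presumably look like: the single integer column, its name (Value), and the DATETIME parameter types are assumptions, though SQL Server does require the READONLY modifier on any table-valued parameter.

        -- Sketch of the user-defined table type from the last post
        -- (single INT column assumed; the column name "Value" is an assumption)
        CREATE TYPE IntegerListTableType AS TABLE
        (
            Value INT NOT NULL
        );
        GO

        -- Sketch of the stored procedure signature; the body is omitted
        CREATE PROCEDURE rpt_CustomerTransactionSummary
            @startDate DATETIME,
            @endDate DATETIME,
            @customerIds IntegerListTableType READONLY -- TVPs must be declared READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            -- ...summarize transactions for the customers in @customerIds
            -- that fall between @startDate and @endDate...
        END
        GO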
    In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work.

    Manually Define Parameters

    First, we’ll need to manually define the parameters for the report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a multi-valued Integer parameter. In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area:

    1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
    2:      ' EXEC rpt_CustomerTransactionSummary
    3:          @startDate=''' + @startDate + ''',
    4:          @endDate=''' + @endDate + ''',
    5:          @customerIds=@customerIdList')

    By using the ‘Text’ query type we can enter any arbitrary SQL that we want and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters:

    @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit.
    @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3.
    @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4.

    At this point the dataset designer will be able to correctly parse the query, and should even be able to detect the fields that the stored procedure will return without any values needing to be specified when prompted. Once the dataset has been correctly defined, we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we declared in our text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
    Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post, so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):

    Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
        Dim insertStatements As New System.Text.StringBuilder()
        For Each paramValue As Object In paramValues
            insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
        Next
        Return insertStatements.ToString()
    End Function

    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report, we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to:

    =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)

    In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report.

    Limitations And Final Thoughts

    When I first started looking into the text query approach described above, I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter with a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports, I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using multi-valued parameters in a stored procedure.
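    As a recap of how the pieces fit together, here is roughly the dynamic SQL that the text query assembles at run time. The customer Ids and dates are hypothetical values chosen for illustration, standing in for whatever the user selects:

        -- What the exec('...') string expands to if the user picks
        -- customers 1, 2 and 3 for June 2012 (illustrative values only)
        declare @customerIdList IntegerListTableType
        INSERT @customerIdList VALUES (1)
        INSERT @customerIdList VALUES (2)
        INSERT @customerIdList VALUES (3)
        EXEC rpt_CustomerTransactionSummary
            @startDate='2012-06-01',
            @endDate='2012-06-30',
            @customerIds=@customerIdList

    Seeing the expanded form makes it clear why the @customerIdInserts parameter never reaches the stored procedure itself: it only ever exists as text that declares and fills the table variable before the real EXEC runs.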

    Read the article

  • AutoFit in PowerPoint: Turn it OFF

    - by Daniel Moth
    Once a feature has shipped, it is very hard to eliminate it from the next release. If I were in charge of the PowerPoint product, I would not hesitate for a second to remove the dreadful AutoFit feature. Fortunately, AutoFit can be turned off on a slide-by-slide basis and, even better, globally: go to the PowerPoint "Options" and under "Proofing" find the "AutoCorrect Options…" button, which brings up the dialog where you need to uncheck the last two checkboxes (see the screenshot to the right). AutoFit is the ability for the user to keep hitting the Enter key as they type more and more text into a slide and have it magically still fit, by shrinking the space between the lines and then the text font size. It is the root of all slide evil. It encourages people to think of a slide as a Word document (which may be your goal if you are presenting to execs in Microsoft, but that is a different story). AutoFit is the reason you fall asleep in presentations. AutoFit causes too much text to appear on a slide, which by extension causes the following:

    When the slide appears, the text is so small that it is not readable by everyone in the audience. They dismiss the presenter as someone who does not care for them, and then they stop paying attention.

    If the text is readable but there is too much of it (hence the AutoFit feature kicked in when the slide was authored), the audience is busy reading the slide and not paying attention to the presenter. Humans can either listen well or read well, but not both at the same time, so when they are done reading they feel that they missed whatever the speaker was saying. So they "switch off" for the rest of the slide until the next slide kicks in, which is the natural point for them to pick up paying attention again.

    Every slide ends up with different-sized text. The less visual consistency between slides, the more unprofessional your presentation feels. You can do better than dismiss the (subconscious) negative effect a deck with inconsistent slides has on an audience.

    In contrast, the absence of AutoFit:

    Leads to consistency among all slides in a deck with regard to the amount of text and the size of said text.

    Ensures the text is readable by everyone in the audience (presuming the PowerPoint template is designed for the room where the presentation is delivered).

    Encourages the presenter to create slides with the minimum necessary text to help the audience understand the basic structure, flow, and key points of the presentation. The "meat" of the presentation is delivered verbally by the presenter themselves, which is why they are in the room in the first place.

    Following on from the previous point, the audience can at a quick glance consume the text on the slide when it appears and then concentrate entirely on the presenter and what they have to say.

    You could argue that everything above has nothing to do with the AutoFit feature and everything to do with the advice to keep slide content short. You would be right, but the on-by-default AutoFit feature is the one that stops most people from seeing and embracing that truth. In other words, the slides are the tool that aids the presenter in delivering their message, instead of the presenter being the tool that advances the slides which hold the message. To get there, embrace terse slides: the first step is to turn off this horrible feature (which was probably introduced due to the misuse of this tool within Microsoft). The next steps are described in my next post. Comments about this post welcome at the original blog.

    Read the article

  • Multiple stores for the same niche

    - by pandronic
    I started developing a new niche of products in my country about 3 years ago. That's when I opened my first store. Everything went fine until a year ago, when someone I thought was a friend secretly stole my idea and made his own competing store. I was pretty upset when I caught him and decided to make it as difficult as possible for him, so I made another 4 stores, trying to push him as low as possible in the search results. The new sites have similar products (although not 100% identical), slightly different titles, images and prices. They look different and are built on different e-commerce platforms. They are all hosted on the same server, have roughly the same backlinks, use the same Google account for Analytics, have the same support phone numbers, etc. I wasn't thinking that I was doing anything fishy, so I didn't try to hide anything. The trouble is that those sites, after doing fine for a few months, dropped like bricks in the search results, almost to the point that they can't be found at all. At the moment, the only site that ranks relatively well is the original one, plus a couple of unimportant secondary pages from one of the other sites. How did this happen? Does Google have something against this practice? Did they take action on their own when they realized that I was trying to monopolize this niche, or did my competitor report me for some kind of webspam? And more importantly, what do I do now? Do I shut down all but my original site and 301 redirect users to it from the others? Can I report my competitor for engaging in the same practice? (He fought back, and now he has 3-4 sites, some of which still rank kind of OK. He has no idea about web development, SEO or marketing; he just crudely copies what I do, and he is slowly but surely starting to do better than me.)

    Read the article

< Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >