Search Results

Search found 39991 results on 1600 pages for 'simple framework'.


  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that susceptibility is perhaps getting worse, despite the best efforts of Matt and his trusty sidekick, HR. The constant pressure from shareholders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself. I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work.

    Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time. Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high?

    Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle that all for you? Cheers, Michael

    Read the article

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It’s Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation, with his family, when this error message pops up in his inbox: Msg 211, Level 23, State 51, Line 1. Possible schema corruption. Run DBCC CHECKCATALOG. His heart sinks. It’s ten…no eight…minutes till it’s time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He’ll just wait and see. It was unlikely to come to anything; after all, it did say “possible” schema corruption, not definite. In that moment, his fate was sealed. The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least took two of those eight minutes to delegate the task to a coworker, then he wouldn’t have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed until the entire system was down. When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or ‘Greek Tragedy’ ever written. The particular joy in opera, he told me, isn’t the warbly voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life-experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in “What’s Opera, Doc?” (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend’s analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music: “Error handling? We write our code to never experience errors, so nah…“ “Backups failed today, but it’s okay, we’ll back up tomorrow (we’ll back up tomorrow)“ And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or poetry of the lyrics, but at the inevitable, grisly fate of the protagonists. If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven’t had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices. Tragedies are very much part of IT. Most of a DBA’s day to day work has limited potential to wreak havoc; paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which has the possibility to greatly affect your future and the future of your user’s data. Make those decisions count, and you’ll avoid the tragic fate of many an operatic hero or villain.

    Read the article

  • Subterranean IL: Exception handler semantics

    - by Simon Cooper
    In my blog posts on fault and filter exception handlers, I said that the same behaviour could be replicated using normal catch blocks. Well, that isn't entirely true...

    Changing the handler semantics

    Consider the following:

        .try {
            .try {
                .try {
                    newobj instance void [mscorlib]System.Exception::.ctor()
                    // IL for:
                    // e.Data.Add("DictKey", true)
                    throw
                } fault {
                    ldstr "1: Fault handler"
                    call void [mscorlib]System.Console::WriteLine(string)
                    endfault
                }
            } filter {
                ldstr "2a: Filter logic"
                call void [mscorlib]System.Console::WriteLine(string)
                // IL for:
                // (bool)((Exception)e).Data["DictKey"]
                endfilter
            }{
                ldstr "2b: Filter handler"
                call void [mscorlib]System.Console::WriteLine(string)
                leave.s Return
            }
        } catch object {
            ldstr "3: Catch handler"
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s Return
        }
        Return:
        // rest of method

    If the filter handler is engaged (true is inserted into the exception dictionary), then the following gets printed to the console:

        2a: Filter logic
        1: Fault handler
        2b: Filter handler

    and if the filter handler isn't engaged, then the following is printed:

        2a: Filter logic
        1: Fault handler
        3: Catch handler

    Filter handler execution

    The filter handler is executed first. Hmm, ok. Well, what happens if we replace the fault block with the C# equivalent (with the exception dictionary value set to false)?

        .try {
            // throw exception
        } catch object {
            ldstr "1: Fault handler"
            call void [mscorlib]System.Console::WriteLine(string)
            rethrow
        }

    we get this:

        1: Fault handler
        2a: Filter logic
        3: Catch handler

    The fault handler is executed first, instead of the filter block. Eh? This change in behaviour is due to the way the CLR searches for exception handlers. When an exception is thrown, the CLR stops execution of the thread, and searches up the stack for an exception handler that can handle the exception and stop it propagating further - catch or filter handlers. It checks the type clause of catch clauses, and executes the code in filter blocks to see if the filter can handle the exception. When the CLR finds a valid handler, it saves the handler's location, then goes back to where the exception was thrown and executes fault and finally blocks between there and the handler location, discarding stack frames in the process, until it reaches the handler.

    So?

    By replacing a fault with a catch, we have changed the semantics of when the filter code is executed; by using a rethrow instruction, we've split up the exception handler search into two - one search to find the first catch, then a second when the rethrow instruction is encountered. This is only really obvious when mixing C# exception handlers with fault or filter handlers, so this doesn't affect code written only in C#. However, it could cause some subtle and hard-to-debug effects with object initialization and ordering when using and calling code written in a language that can compile fault and filter handlers.
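    C# 6 later added exception filters via the 'when' keyword (they weren't available in C# when this was written), which makes the same two-pass behaviour easy to observe from C#. Here is a minimal sketch of my own, with a finally block standing in for the fault handler:

        using System;

        static class FilterOrderingDemo
        {
            // Writes a message when the filter expression is evaluated,
            // then tells the CLR this handler doesn't want the exception.
            static bool Filter(string msg)
            {
                Console.WriteLine(msg);
                return false;
            }

            static void Main()
            {
                try
                {
                    try
                    {
                        throw new Exception();
                    }
                    finally
                    {
                        // Runs during the second (unwind) pass, like the fault handler above.
                        Console.WriteLine("1: Fault/finally handler");
                    }
                }
                catch (Exception) when (Filter("2a: Filter logic"))
                {
                    Console.WriteLine("2b: Filter handler");
                }
                catch (Exception)
                {
                    Console.WriteLine("3: Catch handler");
                }
            }
        }

    Running this prints "2a: Filter logic", then "1: Fault/finally handler", then "3: Catch handler": the filter expression is evaluated during the CLR's first pass, before the finally block runs during the unwind, matching the ordering described above.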

    Read the article

  • An update process that is even worse than Windows updates

    - by fatherjack
    I'm sorry, EA, but your game update process stinks. I am not a hardcore gamer, but I own a PlayStation 3 and have been playing Battlefield Bad Company 2 (BFBC2) a bit since I got it for my birthday, and there have been two recent updates to the game. Now I like the idea of games getting updates via downloadable content. You can buy a game and if there are changes that are needed (service packs if you will) then they can be distributed over the games console network. Great. Sometimes it fixes problems,...(read more)

    Read the article

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we’ve been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn’t complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we though it would be useful for people to see the changes and have an opportunity to comment on them. Before going any further, we should tell you what the EAP contains that’s different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn’t had a whole lot of testing and there may have been new bugs introduced during the development work we’ve been doing. The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change callsites to display calls using named parameters; this looks like hard work to get right. The forthcoming version of the C# language introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#: dynamic target = MyTestObject(); target.Hello("Mum"); We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP. To support the dynamic features, we’ve added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without “Dynamic” in the name, reflect the fact that we don’t have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change – in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile time types. There’s also the interface, IDynamicVariableDeclaration, that can be used to determine if a particular variable is used at dynamic call sites as a target. A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target and we have done some work to get the pdb generation right for these new features. The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed. Please try it out and send your feedback to the EAP forum.
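    For anyone curious what is being hidden, here is a rough, simplified sketch of my own (not Reflector's actual output) of the kind of call-site plumbing the C# 4 compiler emits for a dynamic call such as target.Hello("Mum") - the machinery that the new IDynamic* expressions allow the language writer to collapse back into the two-line dynamic form above:

        using System;
        using System.Runtime.CompilerServices;
        using Microsoft.CSharp.RuntimeBinder; // requires a reference to Microsoft.CSharp.dll

        class DynamicCallSketch
        {
            // Cached call site; the real compiler hides this in a generated container class.
            static CallSite<Action<CallSite, object, string>> helloSite;

            static void CallHello(object target)
            {
                if (helloSite == null)
                {
                    // The method name survives only as a string - which is why the
                    // decompiler cannot know more than the name at the call site.
                    helloSite = CallSite<Action<CallSite, object, string>>.Create(
                        Binder.InvokeMember(
                            CSharpBinderFlags.ResultDiscarded,
                            "Hello",
                            null,
                            typeof(DynamicCallSketch),
                            new[]
                            {
                                CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null),
                                CSharpArgumentInfo.Create(
                                    CSharpArgumentInfoFlags.UseCompileTimeType | CSharpArgumentInfoFlags.Constant,
                                    null)
                            }));
                }
                // Dispatch through the cached site at runtime.
                helloSite.Target(helloSite, target, "Mum");
            }
        }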

    Read the article

  • I'm Seeing Red

    - by Grant Fritchey
    Hello World! My move into the world of Red Gate is more and more complete with my shiny, new, red, blog. The goal of this blog is not to compete with, or replace, my blog over at ScaryDBA. Instead, this blog is where I can share things I find about Red Gate products and services. I can talk about the things that we're doing at Red Gate. I can talk about the things I'm doing at Red Gate. In short, this is my Red Gate blog. I'm still the Scary DBA, but over here, I'm painted bright red (and no, I was promised that no pictures were taken of that process). So look for tips and suggestions about Red Gate products, methods to help you do your job better using one of our tools, and anything else I can think of or comment on that supports you and our excellent software.

    Read the article

  • Regional Office Virtualization - Less Can Be More

    The dream of an 'office in a box' has been around for years, but the increasing sophistication of virtualization software has turned the dream into reality. Ben Lye explains the problems and benefits of reducing the amount of physical hardware deployed in his organisation's regional offices.

    Read the article

  • SharePoint Web Part Constructor Fires Twice When Adding it to the Page (and has a different security

    - by Damon
    We had some exciting times debugging an interesting issue with SharePoint 2007 Web Parts.  We had some code in staging that had been running just fine for weeks and had not been touched or changed in about the same amount of time.  However, when we tried to move the web part into a different staging environment, the part started throwing a security exception when we tried to add it to a page.  After a bit of debugging, we determined that the web part was throwing the exception while trying to access the SPGroups property on the SharePoint site.  This was pretty strange because we were logged in as an admin and the code was working perfectly fine before.  During the debugging process, however, we found out that the web part constructor was being fired twice.  On one request, the security context did not seem to have everything it needed in order to run.  On the other request, the security context was populated with the context of the user making the request (like it normally is).  Moving the security code outside of the constructor seems to have fixed the issue. Why the discrepancy between the two staging environments?  Turns out we deployed the part originally, then deployed an update with the security code.  Since the part was never "added" to the page after the code updates were made (we just deployed a new assembly to make the updates), we never saw the problem.  It seems as though the constructor fires twice when you are adding the web part to the page, and when you run the web part from the web part gallery.  My only thought on why this would occur is that SharePoint is instantiating an instance to get some information from it - which is odd because you would think that could be done with reflection without requiring a new object.  Anyway, the workaround is to not put anything security-related inside the constructor, or to account properly for the possibility of the security context not being present if you are adding the item to the page. Technorati Tags: SharePoint, .NET, Microsoft, ASP.NET
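    By way of illustration, here is a hypothetical sketch (not the code from the project described above) of that workaround: keep the constructor trivial and defer anything that touches the security context until the page lifecycle, when SPContext is populated:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;

        public class GroupAwareWebPart : WebPart
        {
            private bool isSiteAdmin;

            // Keep the constructor trivial: it can run more than once while the part
            // is being added to a page, and the security context may not be set up yet.
            public GroupAwareWebPart()
            {
            }

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);

                // By now the request's security context is available, so it is safe
                // to look at the current user and the site's groups.
                SPWeb web = SPContext.Current.Web;
                isSiteAdmin = web.CurrentUser != null && web.CurrentUser.IsSiteAdmin;
            }

            protected override void RenderContents(HtmlTextWriter writer)
            {
                writer.Write(isSiteAdmin ? "Admin view" : "Standard view");
            }
        }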

    Read the article

  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter

    The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive. Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but seems to have deteriorated further still, even in the face of improved provision of equipment. In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering-mindset, who can solve problems with whatever technologies are available and learn new techniques quickly, in a rapidly-changing field. In retrospect, the assumption that specific training was required rather than an education was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT, and thereby led directly to the shortage of people with the skills that are required to exploit the potential of Information Technology.

    Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too! So, get hold of any power supply that provides a 5VDC source at the required 700mA; an old Blackberry charger will do or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or TV (sound and video). You'll also need to be able to cope with wired Ethernet 10/100, if you want networking. With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform, then maybe, at a time when some countries are talking of orders in the millions, you should consider it.

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you’ve answered “Yes” to any of the above questions then you’ve suffered from ‘drift’. Database drift is where the state of a database (schema, particularly) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly-understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it’s time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update. A big issue with drift is that it can be hard to spot and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you’ll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process? Red Gate’s Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We’ve started to get a really good idea of how big a problem it can be and what database professionals need to know and do, in order to deal with it.  It’s fair to say, we’re pretty excited at the prospect of creating a tool that will really help and we’ve got some great feedback on our initial ideas.   We’re now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you’d like help solving and are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list

    Read the article

  • Operator of the Week - Spools, Eager Spool

    For the fifth part of Fabiano's mission to describe the major Showplan Operators used by SQL Server's Query Optimiser, he introduces the spool operators and particularly the Eager Spool, explains blocking and non-blocking and then describes how the Halloween Problem is avoided.

    Read the article

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up them over lunch in the Red Gate canteen. I was so impressed with what I found there, that, three days later, I'd applied for a role as a developer. And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over the development in the 90s. What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in big contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails. Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI. What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long and short term aims for the product and working out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly. 
Have you noticed anything different about developing tools for DBAs rather than other IT kinds of user? It seems to me that DBAs are quite independent minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look for what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their set up, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • SQL Server CTE Basics

    The CTE was introduced into standard SQL in order to simplify various classes of SQL Queries for which a derived table just wasn't suitable. For some reason, it can be difficult to grasp the techniques of using it. Well, that's before Rob Sheldon explained it all so clearly for us.

    Read the article

  • Reflector Pro has now been released!

    - by CliveT
    After moving into the .NET division in May, and having a great time working on Reflector, I'm pleased to say that the results of that work are now available. Reflector Pro has now been released! The old Reflector as you know and love it is still available free of charge, and as part of this project we've fixed a number of bugs in the de-compilation that have been around for a long time. The Pro version comes as an add-in for Visual Studio - this offers dynamic de-compilation and generation of pdb files which allow you to step into the de-compiled code. Alex has some good pictures of this functionality on his beta post from around a month ago. Thanks to the other guys who've worked on this for taking me along for the ride - Alex, Andrew, Bart and Jason. Stephen did some great usability work, Chris Alford did some great technical authoring and Laila handled the launch publicity. Like all projects, there's always more I'd like to have done, but what we have looks like a pretty powerful addition to the developer's set of tools to me. Please try it and give us feedback on the forum.

    Read the article

  • Taking our Friendships to the next level.

    - by RedAndTheCommunity
    Red Gate have been running the Friends of Red Gate program for years now, and over that time we've built some great relationships with some truly awesome members of the SQL and .NET communities. When I took over the running of the program from Annabel in 2011, I was overwhelmed by the enthusiasm and commitment of our Friends. There were just so many of them, however, that it was hard to make the most of the relationships we had with people, and I wanted to fix that. I decided to survey all our Friends, to find out what they wanted to get out of, and put into, being in the Friends of Red Gate (FoRG) program. From the results of that survey, I identified 30 FoRGs that were really willing and able to go that step further to help Red Gate improve their tools, improve their relationship with the community, and improve the Friends of Red Gate program. Those 30 Friends of Red Gate have been awarded 'FoRG+' status. That means they'll: Have a closer relationship with the product teams, by getting involved in projects Have even more access to the inside track about the tools they're interested in Get the opportunity to come visit us at the Red Gate office and really influence the development of the tools. Plus more, depending on how the individual FoRG+ wants to work with us. This doesn't mean I've forgotten our other Friends; I'm working on ways to improve their experience of the Friends of Red Gate program. I'll write about them in another post. If you're an existing Friend of Red Gate, and you're interested in finding out how to get involved in the FoRG+ program, then I'd love to chat to you. For anyone that's interested in joining the Friend of Red Gate program, take a look at the web page dedicated to the program, and get in touch at [email protected] to be put on the waiting list for our 2013 program.

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes: If they press "Send Error Report", I get: And then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
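    To illustrate that change of style with a small, purely hypothetical example (nothing to do with Reflector's own code): rather than quietly "handling" a case you believe is impossible, throw, and let the automated error report - stack trace plus local variable values - tell you if it ever really happens:

        using System;

        static class InvoiceParser
        {
            // Old habit: silently fall back on a guess for a state we believe can't occur.
            // New habit: throw, and rely on the automated error report to surface it.
            public static decimal ParseDiscount(string code)
            {
                switch (code)
                {
                    case "NONE": return 0m;
                    case "STAFF": return 0.2m;
                    case "PARTNER": return 0.35m;
                    default:
                        throw new InvalidOperationException("Unexpected discount code: " + code);
                }
            }
        }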

    Read the article

  • Such thing as a free lunch

    - by red@work
    There is a lot of hard work that goes on in Red Gate, no doubt. And then there are things we're asked to get involved with that aren't hard and don't feel much like work. What? Give up our free lunch at Red Gate for... a free lunch in a pub? Within an hour, myself and a colleague are at the Railway Vue pub in nearby Impington. This is all part of Red Gate's aim to hire more Software Engineers and Test Engineers, to help Red Gate grow into one of the greatest software companies in the world (it's already the best small software development company in the UK). Phase one then - buy lunch for Cambridge. Seriously, not just the targeted engineers, but for anyone who could print the voucher and make it to the nearest of the venues, two of which happen to be pubs. We're here to watch people happily eat a free pub lunch at Red Gate's expense. We also get involved, and I swear I didn't order a beer with the food, but the landlord says I clearly did and I'm not one to argue. Red Gate are offering a free iPad to anyone that comes to interview for a Software Engineer or Test Engineer role. We speak to a few engineers who are genuinely interested. We speak to a couple of DBAs too, and encourage them to make speculative applications - no free iPad on offer for them, but that's not really the point. The point is, everyone should apply to work here! It's that good. We overhear someone ask if 'these vouchers really work?' They do. There's no catch. The free iPad? Again, no catch. If that's what it takes to get talented engineers through our doors for an interview, then that's all good. Once they see where we work and how we work, we think they'll want to come and work with us. The following day, Red Gate decides to repeat the offer, and that means more hard work, this time at The Castle pub. Another landlord that mishears 'mineral water' and serves me a beer. There are many more people clutching the printed vouchers and they all seem very happy to be getting a free lunch from Red Gate. "Come and work for us," we suggest, "lunch is always free!" So if you're a talented engineer, like free lunches and want a free iPad, you know what to do.

    Read the article

  • Alan Turing Needs Your Help

    - by Chris Massey
    Well... sort of. Clearly, you are using a computer. If you are on this site, you are probably quite familiar with computers as artifacts of our modern society. Hopefully, you are also familiar with the fact that Alan Turing, logician and mathematician extraordinaire, was instrumental in laying down the foundations of modern computer science, and did a little work to help turn the tide of WWII in the Allies' favor. Hold that thought. A phenomenal collection of Turing's papers (including his first ever...(read more)

    Read the article

  • .NET Demon 1.0 Released

    - by theo.spears
    Today we're officially releasing version 1.0 of .NET Demon, the Visual Studio extension Alex Davies and I have been working on for the last 6 months. There have been beta versions available for a while, but we have now released the first "official" version and made it available to purchase. If you haven't yet tried the tool, it's all about reducing the time between when you write a line of code and when you are able to try it out, so you don't have to wait:

    Continuous compilation - We use spare CPU cycles on your machine to compile your code in the background when you make changes, so assemblies are up to date whenever you want to run them. Some clever logic means we only recompile code which may have been affected by your changes.

    Continuous save - .NET Demon can perform background saving, so you don't lose any work in case of crashes or power failures, and are less likely to forget to commit changed files.

    Continuous testing (Experimental) - The testing tool in .NET Demon watches which code you change in your solution, and automatically reruns tests which are impacted, so you learn about any breaking changes as quickly as possible. It also gives you inline test coverage information inside Visual Studio. Continuous testing is still experimental - it will work fine in many cases, but we know it's not yet perfect.

    Releasing version 1.0 doesn't mean we're pausing development or that we'll stop pushing out improvements. We will still be regularly providing new versions with improved functionality and fixes for any bugs people come across. Visit the .NET Demon product page to download.

    Read the article

  • Developing a SQL Server Function in a Test-Harness.

    - by Phil Factor
    /* Many times, it is a lot quicker to take some pain up-front and make a proper development/test harness for a routine (function or procedure) rather than think ‘I’m feeling lucky today!’. Then, you keep code and harness together from then on. Every time you run the build script, it runs the test harness too.  The advantage is that, if the test harness persists, then it is much less likely that someone, probably ‘you-in-the-future’  unintentionally breaks the code. If you store the actual code for the procedure as well as the test harness, then it is likely that any bugs in functionality will break the build rather than to introduce subtle bugs later on that could even slip through testing and get into production.   This is just an example of what I mean.   Imagine we had a database that was storing addresses with embedded UK postcodes. We really wouldn’t want that. Instead, we might want the postcode in one column and the address in another. In effect, we’d want to extract the entire postcode string and place it in another column. This might be part of a table refactoring or int could easily be part of a process of importing addresses from another system. We could easily decide to do this with a function that takes in a table as its parameter, and produces a table as its output. This is all very well, but we’d need to work on it, and test it when you make an alteration. By its very nature, a routine like this either works very well or horribly, but there is every chance that you might introduce subtle errors by fidding with it, and if young Thomas, the rather cocky developer who has just joined touches it, it is bound to break.     right, we drop the function we’re developing and re-create it. This is so we avoid the problem of having to change CREATE to ALTER when working on it. */ IF EXISTS(SELECT * FROM sys.objects WHERE name LIKE ‘ExtractPostcode’                                      and schema_name(schema_ID)=‘Dbo’)     DROP FUNCTION dbo.ExtractPostcode GO   /* we drop the user-defined table type and recreate it */ IF EXISTS(SELECT * FROM sys.types WHERE name LIKE ‘AddressesWithPostCodes’                                    and schema_name(schema_ID)=‘Dbo’)   DROP TYPE dbo.AddressesWithPostCodes GO /* we drop the user defined table type and recreate it */ IF EXISTS(SELECT * FROM sys.types WHERE name LIKE ‘OutputFormat’                                    and schema_name(schema_ID)=‘Dbo’)   DROP TYPE dbo.OutputFormat GO   /* and now create the table type that we can use to pass the addresses to the function */ CREATE TYPE AddressesWithPostCodes AS TABLE ( AddressWithPostcode_ID INT IDENTITY PRIMARY KEY, –because they work better that way! Address_ID INT NOT NULL, –the address we are fixing TheAddress VARCHAR(100) NOT NULL –The actual address ) GO CREATE TYPE OutputFormat AS TABLE (   Address_ID INT PRIMARY KEY, –the address we are fixing   TheAddress VARCHAR(1000) NULL, –The actual address   ThePostCode VARCHAR(105) NOT NULL – The Postcode )   GO CREATE FUNCTION ExtractPostcode(@AddressesWithPostCodes AddressesWithPostCodes READONLY)  /** summary:   > This Table-valued function takes a table type as a parameter, containing a table of addresses along with their integer IDs. Each address has an embedded postcode somewhere in it but not consistently in a particular place. The routine takes out the postcode and puts it in its own column, passing back a table where theinteger key is accompanied by the address without the (first) postcode and the postcode. 
If no postcode, then the address is returned unchanged and the postcode will be a blank string Author: Phil Factor Revision: 1.3 date: 20 May 2014 example:      – code: returns:   > Table of  Address_ID, TheAddress and ThePostCode. **/     RETURNS @FixedAddresses TABLE   (   Address_ID INT, –the address we are fixing   TheAddress VARCHAR(1000) NULL, –The actual address   ThePostCode VARCHAR(105) NOT NULL – The Postcode   ) AS – body of the function BEGIN DECLARE @BlankRange VARCHAR(10) SELECT  @BlankRange = CHAR(0)+‘- ‘+CHAR(160) INSERT INTO @FixedAddresses(Address_ID, TheAddress, ThePostCode) SELECT Address_ID,          CASE WHEN start>0 THEN REPLACE(STUFF([Theaddress],start,matchlength,”),‘  ‘,‘ ‘)             ELSE TheAddress END            AS TheAddress,        CASE WHEN Start>0 THEN SUBSTRING([Theaddress],start,matchlength-1) ELSE ” END AS ThePostCode FROM (–we have a derived table with the results we need for the chopping SELECT MAX(PATINDEX([matched],‘ ‘+[Theaddress] collate SQL_Latin1_General_CP850_Bin)) AS start,         MAX( CASE WHEN PATINDEX([matched],‘ ‘+[Theaddress] collate SQL_Latin1_General_CP850_Bin)>0 THEN TheLength ELSE 0 END) AS matchlength,        MAX(TheAddress) AS TheAddress,        Address_ID FROM (SELECT –first the match, then the length. There are three possible valid matches         ‘%['+@BlankRange+'][A-Z][0-9] [0-9][A-Z][A-Z]%’, 7 –seven character postcode       UNION ALL SELECT ‘%['+@BlankRange+'][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%’, 8       UNION ALL SELECT ‘%['+@BlankRange+'][A-Z][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%’, 9)      AS f(Matched,TheLength) CROSS JOIN  @AddressesWithPostCodes GROUP BY [address_ID] ) WORK; RETURN END GO ——————————-end of the function————————   IF NOT EXISTS (SELECT * FROM sys.objects WHERE name LIKE ‘ExtractPostcode’)   BEGIN   RAISERROR (‘There was an error creating the function.’,16,1)   RETURN   END   /* now the job is only half done because we need to make sure that it works. So we now load our sample data, making sure that for each Sample, we have what we actually think the output should be. 
*/ DECLARE @InputTable AddressesWithPostCodes INSERT INTO  @InputTable(Address_ID,TheAddress) VALUES(1,’14 Mason mews, Awkward Hill, Bibury, Cirencester, GL7 5NH’), (2,’5 Binney St      Abbey Ward    Buckinghamshire      HP11 2AX UK’), (3,‘BH6 3BE 8 Moor street, East Southbourne and Tuckton W     Bournemouth UK’), (4,’505 Exeter Rd,   DN36 5RP Hawerby cum BeesbyLincolnshire UK’), (5,”), (6,’9472 Lind St,    Desborough    Northamptonshire NN14 2GH  NN14 3GH UK’), (7,’7457 Cowl St, #70      Bargate Ward  Southampton   SO14 3TY UK’), (8,”’The Pippins”, 20 Gloucester Pl, Chirton Ward,   Tyne & Wear   NE29 7AD UK’), (9,’929 Augustine lane,    Staple Hill Ward     South Gloucestershire      BS16 4LL UK’), (10,’45 Bradfield road, Parwich   Derbyshire    DE6 1QN UK’), (11,’63A Northampton St,   Wilmington    Kent   DA2 7PP UK’), (12,’5 Hygeia avenue,      Loundsley Green WardDerbyshire    S40 4LY UK’), (13,’2150 Morley St,Dee Ward      Dumfries and Galloway      DG8 7DE UK’), (14,’24 Bolton St,   Broxburn, Uphall and Winchburg    West Lothian  EH52 5TL UK’), (15,’4 Forrest St,   Weston-Super-Mare    North Somerset       BS23 3HG UK’), (16,’89 Noon St,     Carbrooke     Norfolk       IP25 6JQ UK’), (17,’99 Guthrie St,  New Milton    Hampshire     BH25 5DF UK’), (18,’7 Richmond St,  Parkham       Devon  EX39 5DJ UK’), (19,’9165 laburnum St,     Darnall Ward  Yorkshire, South     S4 7WN UK’)   Declare @OutputTable  OutputFormat  –the table of what we think the correct results should be Declare @IncorrectRows OutputFormat –done for error reporting   –here is the table of what we think the output should be, along with a few edge cases. INSERT INTO  @OutputTable(Address_ID,TheAddress, ThePostcode)     VALUES         (1, ’14 Mason mews, Awkward Hill, Bibury, Cirencester, ‘,‘GL7 5NH’),         (2, ’5 Binney St   Abbey Ward    Buckinghamshire      UK’,‘HP11 2AX’),         (3, ’8 Moor street, East Southbourne and Tuckton W    Bournemouth UK’,‘BH6 3BE’),         (4, ’505 Exeter Rd,Hawerby cum Beesby   Lincolnshire UK’,‘DN36 5RP’),         (5, ”,”),         (6, ’9472 Lind St,Desborough    Northamptonshire NN14 3GH UK’,‘NN14 2GH’),         (7, ’7457 Cowl St, #70    Bargate Ward  Southampton   UK’,‘SO14 3TY’),         (8, ”’The Pippins”, 20 Gloucester Pl, Chirton Ward,Tyne & Wear   UK’,‘NE29 7AD’),         (9, ’929 Augustine lane,  Staple Hill Ward     South Gloucestershire      UK’,‘BS16 4LL’),         (10, ’45 Bradfield road, ParwichDerbyshire    UK’,‘DE6 1QN’),         (11, ’63A Northampton St,Wilmington    Kent   UK’,‘DA2 7PP’),         (12, ’5 Hygeia avenue,    Loundsley Green WardDerbyshire    UK’,‘S40 4LY’),         (13, ’2150 Morley St,     Dee Ward      Dumfries and Galloway      UK’,‘DG8 7DE’),         (14, ’24 Bolton St,Broxburn, Uphall and Winchburg    West Lothian  UK’,‘EH52 5TL’),         (15, ’4 Forrest St,Weston-Super-Mare    North Somerset       UK’,‘BS23 3HG’),         (16, ’89 Noon St,  Carbrooke     Norfolk       UK’,‘IP25 6JQ’),         (17, ’99 Guthrie St,      New Milton    Hampshire     UK’,‘BH25 5DF’),         (18, ’7 Richmond St,      Parkham       Devon  UK’,‘EX39 5DJ’),         (19, ’9165 laburnum St,   Darnall Ward  Yorkshire, South     UK’,‘S4 7WN’)       insert into @IncorrectRows(Address_ID,TheAddress, ThePostcode)        SELECT Address_ID,TheAddress,ThePostCode FROM dbo.ExtractPostcode(@InputTable)       EXCEPT     SELECT Address_ID,TheAddress,ThePostCode FROM @outputTable; If @@RowCount>0        Begin        PRINT ‘The following rows gave ‘;     SELECT 
Address_ID,TheAddress,ThePostCode FROM @IncorrectRows        RAISERROR (‘These rows gave unexpected results.’,16,1);     end   /* For tear-down, we drop the user defined table type */ IF EXISTS(SELECT * FROM sys.types WHERE name LIKE ‘OutputFormat’                                    and schema_name(schema_ID)=‘Dbo’)   DROP TYPE dbo.OutputFormat GO /* once this is working, the development work turns from a chore into a delight and one ends up hitting execute so much more often to catch mistakes as soon as possible. It also prevents a wildly-broken routine getting into a build! */

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky.

    Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end-user, to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow. To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: As soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we get to avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs.

    Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
    Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machines. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.

    Read the article

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes and then apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient.

    First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too. I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: Long-running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned.

    At this point, I could just start responding to the Alerts. If I click on one of the Long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But, I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment. Instead, I'm going to back up and look at the Global Overview for the SQL Instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty.

    That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But, I want to look a little more at some historical data, to understand better how this server is behaving. More next time.

    Read the article

  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and Community collaboration to solve some SQL Server string-related headaches. Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive Developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety; a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally leading to resolution and a blog post that shared what he learned. Perfect; except that sometimes you have to be prepared to share what you've learned so far, while still mired in the phase of nagging discomfort. A good recent example of this can be found on our own blogs. Despite being a loud advocate of the lightning-fast T-SQL-based string splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold; the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques. I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never". Both are great examples of the power of Community learning, and the latter in particular of the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough for a problem by the community, I'd love to hear about it. Cheers, Tony.

    Read the article

  • Generate a merge statement from table structure

    - by Nigel Rivett
    This code generates a merge statement joining on the natural key and checking all other columns to see if they have changed. The full version deals with type 2 processing and an audit trail, but this version is useful. Just the insert or update part is handy too. Change the table at the top (spt_values in master in this version) and the join columns for the merge in @nk. The output generated is at the top and the code to run to generate it below.

    Output

        merge spt_values a
        using spt_values b
        on a.name = b.name
        and a.number = b.number
        and a.type = b.type
        when matched and (1=0
          or (a.low <> b.low) or (a.low is null and b.low is not null) or (a.low is not null and b.low is null)
          or (a.high <> b.high) or (a.high is null and b.high is not null) or (a.high is not null and b.high is null)
          or (a.status <> b.status) or (a.status is null and b.status is not null) or (a.status is not null and b.status is null)
        )
        then update set
          low = b.low
        , high = b.high
        , status = b.status
        when not matched by target then insert
        (
          name
        , number
        , type
        , low
        , high
        , status
        )
        values
        (
          b.name
        , b.number
        , b.type
        , b.low
        , b.high
        , b.status
        );

    Generator

        set nocount on
        declare @t varchar(128) = 'spt_values'
        declare @i int = 0
        -- this is the natural key on the table used for the merge statement join
        declare @nk table (ColName varchar(128))
        insert @nk select 'Number'
        insert @nk select 'Name'
        insert @nk select 'Type'
        declare @cols table (seq int, nkseq int, type int, colname varchar(128))
        ;with cte as
        (
          select ordinal_position,
                 type = case when columnproperty(object_id(@t), COLUMN_NAME, 'IsIdentity') = 1 then 3
                             when nk.ColName is not null then 1
                             else 0 end,
                 COLUMN_NAME
          from information_schema.columns c
          left join @nk nk on c.column_name = nk.ColName
          where table_name = @t
        )
        insert @cols (seq, nkseq, type, colname)
        select ordinal_position, row_number() over (partition by type order by ordinal_position), type, COLUMN_NAME
        from cte
        declare @result table (i int, j int, k int, data varchar(500))
        select @i = @i + 1
        insert @result (i, data) select @i, 'merge ' + @t + ' a'
        select @i = @i + 1
        insert @result (i, data) select @i, ' using cte b'
        select @i = @i + 1
        insert @result (i, j, data)
        select @i, nkseq, ' ' + case when nkseq = 1 then 'on' else 'and' end + ' a.' + ColName + ' = b.' + ColName
        from @cols where type = 1
        select @i = @i + 1
        insert @result (i, data) select @i, ' when matched and (1=0'
        select @i = @i + 1
        insert @result (i, j, k, data)
        select @i, seq, 1,
               ' or (a.' + ColName + ' <> b.' + ColName + ')'
             + ' or (a.' + ColName + ' is null and b.' + ColName + ' is not null)'
             + ' or (a.' + ColName + ' is not null and b.' + ColName + ' is null)'
        from @cols where type <> 1
        select @i = @i + 1
        insert @result (i, data) select @i, ' )'
        select @i = @i + 1
        insert @result (i, data) select @i, ' then update set'
        select @i = @i + 1
        insert @result (i, j, data)
        select @i, nkseq, ' ' + case when nkseq = 1 then ' ' else ', ' end + colname + ' = b.' + colname
        from @cols where type = 0
        select @i = @i + 1
        insert @result (i, data) select @i, ' when not matched by target then insert'
        select @i = @i + 1
        insert @result (i, data) select @i, ' ('
        select @i = @i + 1
        insert @result (i, j, data)
        select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + colname
        from @cols where type <> 3
        select @i = @i + 1
        insert @result (i, data) select @i, ' )'
        select @i = @i + 1
        insert @result (i, data) select @i, ' values'
        select @i = @i + 1
        insert @result (i, data) select @i, ' ('
        select @i = @i + 1
        insert @result (i, j, data)
        select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + 'b.' + colname
        from @cols where type <> 3
        select @i = @i + 1
        insert @result (i, data) select @i, ' );'
        select data from @result order by i, j, k, data

    Read the article
