Search Results


  • Azure

    - by Grant Fritchey
    I've been tasked to learn SQL Azure, as well as test all the Red Gate products on it. My one, BIG, fear has been that I'll receive some mongo bill in the mail because I've exceeded the MSDN testing limit. I know people who have had that problem. I've been trying to keep an eye on my usage, but, let's face it, it's not something I think about every day. But now I don't have to. Red Gate has been working with Azure since long before I showed up. They've already released a little piece of software that I just found out about, called CloudTally. It gathers your usage and sends you a daily email so you can know if you're starting to approach that limit. Check it out, it's free.

    Read the article

  • Expanding Influence and Community

    - by Johnm
    When I was just nine years of age my father introduced me to the computer. It was a Radio Shack TRS-80 Color Computer (aka: CoCo). He shared with me the nuances of writing BASIC and it wasn't long before I was in the back seat of the school bus scribbling, on a pad of paper, the code I would later type. My father demonstrated that while my friends were playing their Atari 2600 consoles, I had the unique opportunity to create my own games on the CoCo, one of which provided a great friend of mine hours and hours of hilarity and entertainment. It wasn't long before my father was inviting me to tag along as he drove to the local high school where a gathering of fellow CoCo enthusiasts assembled. In these meetings, all in attendance would chat about their latest challenges and solutions. They would swap the labors of their sleepless nights spent eagerly gazing into their green-and-black screens. Friendships were built and business partners were developed. While these experiences at the time in my pre-teen mind were chalked up to simply sharing time with my father, they had a tremendous impact on me later in life. This past weekend I attended the Louisville SQL Saturday (#45). It was great to see that there were some who brought along their children. It is encouraging to see fresh faces in the crowd at our monthly IndyPASS meetings. Each time I see the youthful eyes peering from the audience while the finer details of SQL Server are presented, I cannot help but be transported back to the experiences that I enjoyed in those CoCo days. It is exciting to think of how these experiences are impacting their lives and stimulating their minds. Some of these children have actually approached me asking questions about what was presented or simply bragging about their latest discovery in programming. One of the topics that arose in the "Women in Technology" session in Louisville, which was masterfully facilitated by Kathi Kellenberger, was exploring how we could ignite the spark of interest in databases among the youth. It was awesome to hear that there were some who volunteer their time to share their experiences with students. It made me wonder what user groups could achieve if we were to consider expanding our influence and community beyond our immediate peers to include those who are simply enjoying their time with their father or mother.

    Read the article

  • Building a List of All SharePoint Timer Jobs Programmatically in C#

    - by Damon Armstrong
    One of the most frustrating things about SharePoint is that the difficulty in figuring something out is inversely proportional to the simplicity of what you are trying to accomplish. Case in point: yesterday I wanted to get a list of all the timer jobs in SharePoint. Having never done this, nor having any idea of exactly how to do it right off the top of my head, I turned to Google. I like to think my Google-fu is fair to good, so I normally find exactly what I'm looking for in the first hit. But on the topic of listing all SharePoint timer jobs, all it came up with was a PowerShell command (Get-SPTimerJob) and nothing more. Refined search after refined search continued to turn up nothing. So apparently I am the only person on the planet who needs to get a list of the timer jobs in C#. In case you are the second person on the planet who needs to do this, the code to do so follows:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
           var timerJobs = new List<SPJobDefinition>();
           foreach (var job in SPAdministrationWebApplication.Local.JobDefinitions)
           {
              timerJobs.Add(job);
           }
           foreach (SPService curService in SPFarm.Local.Services)
           {
              foreach (var job in curService.JobDefinitions)
              {
                 timerJobs.Add(job);
              }
           }
        });

    For reference, you have the two foreach loops because the Central Admin web application doesn't end up being in the SPFarm.Local.Services group, so you have to get it manually from the SPAdministrationWebApplication.Local reference.
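
    If all you need is to eyeball the result, a short follow-up loop will dump what was collected. This is a sketch, not part of the original post; it assumes the timerJobs list built above, and Name and Schedule are standard SPJobDefinition members:

        // Sketch: print each collected timer job and its schedule.
        foreach (SPJobDefinition job in timerJobs)
        {
           Console.WriteLine("{0} - {1}", job.Name, job.Schedule);
        }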

    Read the article

  • Creating WPF Prototypes with SketchFlow

    Prototyping, once a frustrating and time-consuming chore, is transformed by SketchFlow. With SketchFlow, WPF prototypes can be created and changed with amazing ease. SketchFlow is WPF's secret weapon. Well, it was secret until Michael Sorens produced this article.

    Read the article

  • Embracing Imperfection

    - by Johnm
    The pursuit of perfection is a road on which we often find ourselves traveling. It is an unpaved road filled with pot-holes and ruts that often destroy our stride. The shoulders of this road are lined with the bones and rotting carcasses of well-planned projects, solutions and dreams of others who have dared the journey. Often the choice to engage in this travel is a compulsive one. We can't help but pack our bags and make the trip. We justify it by equating it to the delivery of a quality product or service. We use our past travels as validation of our worthiness and value. Our shared experience, as tortured pilgrims of perfection, reveals that each odyssey that bewitched us resulted in a stark reminder of the very weaknesses and fears that we were attempting to mollify. The voice of the critic that berated us for the lack of craftsmanship was our own. At the end of the journey, though, our own critical voice was joined by the gnashing of teeth of those who could not reap the fruit of our labor due to its lack of timely delivery. There is another road on which to travel. It is the pursuit of embracing imperfection. The cost of traveling this route is your contribution to its eternal construction. Each segment is designed uniquely. At times it has the appearance of a patchwork quilt; at other times it is well organized and highly measured. In all cases, its construction has continually advanced and been utilized as each segment was delivered by its architect. Those who choose this spindle of the crossroads crack open the shells of their fears to reveal the vapor that is within. They construct their houses upon these shells. Through their hunger for mastery they wring every drop of nectar from failure and discard its husks to the ditches of this road. Through their efforts the thoroughfare begins to develop a personality of its own, a beautifully human one, rich with the strengths and weaknesses of all of its contributors. Like many of us, the pursuit of perfection has not served me well. In fact, I would say that it has been more damaging than it has been helpful. While the perfectionist in me occasionally makes its presence known, I consider myself a "recovering perfectionist". It is evident to me that there is immense beauty found in imperfection. I choose to embrace it. It is grounding. It is constructive. It is honest.

    Read the article

  • SQL Saturday and Exploring Data Privacy

    - by Johnm
    I have been highly impressed with the growth of the SQL Saturday phenomenon. It seems that an announcement for a new wonderful event finds its way to my inbox on a daily basis. I have had the opportunity to attend the first of the SQL Saturdays for Tampa, Chicago, Louisville and recently my home town of Indianapolis. It is my hope that there will be many more in my future. This past weekend I had the honor of being selected to speak amid a great line-up of speakers at SQL Saturday #82 in Indianapolis. My session topic/title was "Exploring Data Privacy". Below is a brief synopsis of my session:

    Data Privacy in a Nutshell
       - Definition of data privacy
       - Examples of personally identifiable data
       - Examples of sensitive data

    Laws and Stuff
       - Various examples of laws, regulations and policies that influence the definition of data privacy
       - General rules of thumb that encompass most laws

    Your Data Footprint
       - Who has personal information about you?
       - What are you exchanging data privacy for?
       - The amazing resilience of data
       - The cost of data loss

    Weapons of Mass Protection
       - Data classification
       - Extended properties
       - Database object schemas
       - An extraordinarily brief introduction to encryption
       - The amazing data professional <- the most important point of the entire session!

    The subject of data privacy is one that is quickly making its way to the forefront of the minds of many data professionals. Somewhere out there someone is storing personally identifiable and other sensitive data about you. In some cases it is kept reasonably secure. In other cases it is kept in total exposure without consideration of its potential to damage you. Who has access to it, and how is it being used? Are we being unnecessarily required to supply sensitive data in exchange for products and services? These are just a few of the questions on everyone's mind. As data loss events of grand scale hit the headlines in ever more frequent succession, the level of frustration and urgency for a solution increases. I assembled this session with the intent to raise awareness of sensitive data and to remind us all that we, data professionals, are the ones who have the greatest impact and influence on how sensitive data is regarded and protected. Mahatma Gandhi once said "Be the change you want to see in the world." This is guidance that I keep near to my heart as I approach this topic of data privacy.
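
    As a tiny illustration of the "Extended properties" bullet in the outline above: a sensitive column can be tagged with a classification label that audit scripts can later read back from sys.extended_properties. This is only a sketch; the connection string, table, column and property names are all hypothetical:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class TagSensitiveColumn
        {
            static void Main()
            {
                // Hypothetical database and object names, for illustration only.
                using (var conn = new SqlConnection("Server=.;Database=CustomerDb;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("sys.sp_addextendedproperty", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("@name", "SensitivityClass");  // the classification label
                        cmd.Parameters.AddWithValue("@value", "Confidential - PII");
                        cmd.Parameters.AddWithValue("@level0type", "SCHEMA");
                        cmd.Parameters.AddWithValue("@level0name", "dbo");
                        cmd.Parameters.AddWithValue("@level1type", "TABLE");
                        cmd.Parameters.AddWithValue("@level1name", "Customer");
                        cmd.Parameters.AddWithValue("@level2type", "COLUMN");
                        cmd.Parameters.AddWithValue("@level2name", "SocialSecurityNumber");
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    Anything tagged this way can be enumerated later from sys.extended_properties, which is what turns a one-off label into something an audit process can act on.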

    Read the article

  • Sharing My Thoughts on Space Flight

    - by Grant Fritchey
    This went out in the DBA newsletter from Red Gate, but I enjoyed writing it so much, I thought I'd share it with a wider audience: I grew up watching the US space program. I watched men walk on the moon for the first time in 1969, when I was only six years old. From that moment on, I dreamed of going into space. I studied aeronautics and tried to get into the Air Force Academy, all in preparation for my long career as an astronaut. Clearly, that didn't quite work out for me. But it sure could for you. At Red Gate, we're running a new contest: DBA in Space. The prize is a sub-orbital flight. When I first got word of this contest, my immediate response was, "And you need me to go right away and do a test flight? Excellent!" No, no test flight needed, plus I was pretty low on the list of volunteers. "That's OK, I'll just enter." Then I was told that, as a Red Gate employee, I couldn't win. My next response was, "I quit." Eventually, I was talked down off the ledge, and agreed to help make this special for some other DBA. Many (most?) of us are science fiction fans, either the soft science of Star Trek and Star Wars, or the hard science of Niven and Pournelle, or Allen Steele. We watched the Shuttles go up and land. We've been dreaming of our own trips into orbit and our vacation home on the Moon for a long, long time. All that might not arrive on schedule, but you've got a shot at breaking clear of the atmosphere. The first stage is a video quiz, starring Brad McGehee, and it's live at www.DBAinSpace.com now. Go for it. Good luck and Godspeed!

    Read the article

  • Feature Usage Reporting in Early Access Programs

    After doing Web development, you can get very used to the luxury of having basic information about your users' machines and browsers. With their permission, you can also get the same information from an application, and can even get more targeted anonymous information that will tell you how the features are used. Kevin explains how this can be used with early access builds to improve the reliability and usability of applications.

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target. At the moment, we don't know what will be in future versions. In most circumstances, this really matters to the developer. When you're using Adobe AIR, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built in. With Metro, you can assume that you're going to be using at least IE 10. If, however, you are using HTML5 in a web application, then you are going to rely heavily on Feature Detection. Feature Detection is a collection of techniques that tell you, via JavaScript, whether the current browser has a feature natively implemented or not. Feature Detection isn't just there for the esoteric stuff such as geolocation, progress bars, <canvas> support, the new <input> types, audio, video, web workers or storage, but is required even for semantic markup, since old browsers make a pig's ear out of rendering it. Feature detection can't rely on just reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it. The problem with relying on the user-agent is that it takes a lot of historical data to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed. The open-source library Modernizr is just about the most essential JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and is intelligent enough to alter semantic markup into 'legacy' markup, using shims on page-load, for old browsers. It also allows you to check which video codecs are installed for playing video, and it provides media queries and conditional resource-loading (formerly YepNope.js). Generally, Modernizr gives you the choice of what to do about browsers that don't support the feature you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript shims, called 'polyfills', to replace the standard API for missing or defective HTML5 functionality. As the Modernizr site says, 'Yes, not only can you use HTML5 today, but you can use it in the past, too!' The evolutionary progress of HTML5 requires a more defensive style of JavaScript programming, where the programmer adopts a mindset of fearing the worst (IE 6) rather than assuming the best, whilst exploiting as many of the new HTML5 features as possible for the requirements of the site or HTML application. Why would anyone want the distraction of developing their own techniques to do this when Modernizr exists to do it for you? Laila

    Read the article

  • Book Review: Defensive Database Programming With SQL Server

    It distils a great deal of practical experience; the writing of it was a considerable task; and it packs in a great deal of information. Alex's book shows how to write robust database applications, and we can all learn from it. We took the book to a critic who never minces his words, and were relieved to find that Joe Celko liked it.

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        -------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        -------------------------------------------------------------------------

    And the same query on 9i:

        -------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        -------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule-Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To get some ideas, we asked some Oracle performance specialists to see if they had any tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
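
    The client-side join is easier to picture with a sketch. This is not the actual Schema Compare for Oracle code - the types and rows below are made up - but it shows the shape of the idea: hash the base rows once on the shared (owner, name) key, then probe that one dictionary for every subsidiary view:

        using System;
        using System.Collections.Generic;

        class TableRow
        {
            public string Owner;
            public string Name;
            public string PartitionInfo;  // filled in from a subsidiary view, if present
            public TableRow(string owner, string name) { Owner = owner; Name = name; }
        }

        class ClientHashJoin
        {
            static void Main()
            {
                // Base rows, as if read straight from ALL_TABLES.
                var tables = new List<TableRow>
                {
                    new TableRow("SCOTT", "EMP"),
                    new TableRow("SCOTT", "DEPT"),
                    new TableRow("HR", "JOBS")
                };

                // Hash the base rows ONCE on the shared join key (owner, name);
                // every subsidiary view re-uses this dictionary instead of
                // re-hashing the same columns for each join.
                var byKey = new Dictionary<(string, string), TableRow>();
                foreach (var t in tables)
                    byKey[(t.Owner, t.Name)] = t;

                // Subsidiary rows, as if read from ALL_PART_TABLES. Probe and
                // attach; base rows with no match simply keep their defaults,
                // giving left outer join semantics.
                var partitionRows = new[] { (Owner: "SCOTT", Name: "EMP", Info: "RANGE") };
                foreach (var p in partitionRows)
                    if (byKey.TryGetValue((p.Owner, p.Name), out var t))
                        t.PartitionInfo = p.Info;

                foreach (var t in tables)
                    Console.WriteLine("{0}.{1}: {2}", t.Owner, t.Name, t.PartitionInfo ?? "(none)");
            }
        }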
    This approach allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
    You will not find the problem every time, and it is part of the process to occasionally admit that the problem was random and that everything works fine now. Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: you can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary 10 = decimal 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.
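
    For anyone who wants to check the riddle's arithmetic rather than take it on faith, one line of code settles it. This is a trivial sketch, not part of the original post:

        using System;

        class TenKindsOfPeople
        {
            static void Main()
            {
                // Parse the string "10" as a base-2 number: binary 10 is decimal 2.
                int kinds = Convert.ToInt32("10", 2);
                Console.WriteLine(kinds);   // prints 2
            }
        }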

    Read the article

  • Continuous Delivery and the Database

    Continuous Delivery is fairly generally understood to be an effective way of tackling the problems of software delivery and deployment by making build, integration and delivery into a routine. The way that databases fit into the Continuous Delivery story has been less well defined. Phil Factor explains why he's an enthusiast for databases being full participants, and suggests practical ways of doing so.

    Read the article

  • Antenna Aligner part 1: In the beginning.

    - by Chris George
    Picture the scene: it's 9pm, I'm in my caravan (yes I know, I've heard all the jokes!) with my family and I'm trying to tune the TV by moving the aerial, retuning, moving the aerial again, retuning... 45 minutes and much cursing later, I succeed. Surely there must be an easier way than this? Aha, an app; there must be an app for that? So I searched the App Store for such an app, but curiously drew a blank. Then the seeds of the idea started to grow. I can code, I work in a software house with lots of very clever people, surely I can make an app that points to the nearest digital TV transmitter! Not having looked into app development before, I investigated how one goes about making an iPhone app and was quickly greeted by the now-familiar answer: "Buy a Mac!". That was not an option for many reasons, mostly wife-related! My dreams were starting to fade until one of my colleagues pointed out that within Red Gate, the very company I work for, there was ongoing development on a piece of software that would allow me to write an app using Visual Studio on a Windows machine: Nomad! Once I signed up for the beta program I got to work learning the jQuery Mobile / PhoneGap framework. Within a couple of hours I had written (in Visual Studio), built in the cloud (using Nomad) and published (via TestFlight) my first iPhone app onto my iPhone! It didn't do much, but it was a step in the right direction. To be continued...

    Read the article

  • Collecting the Information in the Default Trace

    The default trace is still the best way of getting important information to provide a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events and changes to both database settings and schemas. The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes and provide forensic evidence.

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and reacts accordingly when things go awry.

    Read the article

  • Network Administrators Past, Present, and Future

    Even in the short time that PCs have been significantly networked, what it means to be a SysAdmin has changed dramatically. From the first LAN parties to the lumbering infrastructures of today, the role of SysAdmin has evolved and adapted to the shifting needs of users and corporations alike. Brien Posey has been on the front line of it all, and considers the future of this thus-far essential IT role.

    Read the article

  • The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

    Although many professionals, such as pilots, surgeons and IT administrators, require judgement and skill, they also need the ability to perform many repeated, standard procedures in a consistent and methodical manner. These procedures leave little room for creativity, since they must be done right, and in the right order. For DBAs, standardization involves providing and following checklists, notes and instructions, so that the results are predictable, correct and easy to maintain.

    Read the article

  • 3 tips for SQL Azure connection perfection

    - by Richard Mitchell
    One of my main annoyances when dealing with SQL Azure is, of course, the occasional connection problem that communicating with a cloud database entails. If you're used to programming against a locally hosted SQL Server box this can be quite a change, and annoying like you wouldn't believe. So after hitting the problem again in http://cloudservices.red-gate.com I thought I'd write a little post to remind myself how I've got it working. I don't say it's right, but at least "it works on my machine". Tip...(read more)
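
    The usual shape of the fix is retry logic around every connection attempt. Here's a minimal sketch of that pattern - the retry count and back-off are illustrative, and production code would also check the SqlException error number against the documented list of transient SQL Azure errors:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class SqlAzureRetry
        {
            static void ExecuteWithRetry(string connectionString, string sql)
            {
                const int maxAttempts = 3;   // illustrative; tune for your workload
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (var conn = new SqlConnection(connectionString))
                        using (var cmd = new SqlCommand(sql, conn))
                        {
                            conn.Open();
                            cmd.ExecuteNonQuery();
                            return;
                        }
                    }
                    catch (SqlException) when (attempt < maxAttempts)
                    {
                        // Transient drop: back off briefly, then rebuild the
                        // connection from scratch on the next attempt.
                        Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
                    }
                }
            }
        }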

    Read the article

  • Consolidating SQL Server Error Logs from Multiple Instances Using SSIS

    SQL Server hides a lot of very useful information in its error log files. Unfortunately, the process of hunting through all these logs, file-by-file, server-by-server, can cause a problem. Rodney Landrum offers a solution which will allow you to pull error log records from multiple servers into a central database, for analysis and reporting with T-SQL.

    Read the article

  • An Introduction to ASP.NET MVC Extensibility

    Because ASP.NET MVC has been designed with extensibility as its core design principle, almost every logical step of the processing pipeline can be replaced with your own implementation. In fact, the best way to develop applications with ASP.NET MVC is to extend the system. Simone starts a series that explains how to implement extensions to ASP.NET MVC, starting with the ones at the beginning of the pipeline (routing extensions) and finishing with the view extension points.

    Read the article

  • The Cobra Programming Language

    There are suddenly a number of strong alternatives to C# or VB. F#, IronPython and IronRuby are now joined by an open-source alternative called Cobra. Phil is taken by surprise by a language that is so intuitive to use that it is almost like pseudocode.

    Read the article

  • Another VSeWSS Error Resolved (List Template not installed on Farm)

    - by Damon
    Ran into a minor snag today trying to deploy a project with VSeWSS 1.3 - during the deployment it gave me the following error:

        Error 32 VSeWSS Service Error: Feature '2ade6552-200e-4425-8af5-f1f50c115b7e' for list template '10001' is not installed in this farm. The operation could not be completed.

    At first glance, it looked like my features were not installing in the correct order: the solution was installing a list that required a custom list definition before the custom list definition itself was installed. After switching the feature installation order in the WSP view (View -> Other Windows -> WSP View) - you can use the up and down arrows on the view pane to change the order - I had the same error. I decided to try deleting the list, but upon visiting the list in the web interface I received a similar error about how the feature was not installed on the farm. As such, I could not delete the list through the web interface. Fortunately, the stsadm.exe tool worked just fine:

        stsadm.exe -o forcedeletelist -url <urltolisthere>

    Read the article
