Search Results

Search found 28682 results on 1148 pages for 'drop down menu'.

Page 700/1148 | < Previous Page | 696 697 698 699 700 701 702 703 704 705 706 707  | Next Page >

  • The previous system shutdown at xxxx was unexpected

    - by m.edmondson
    For the past two nights we had a remote server shutdown unexpectedly. When rebooted we get the following message: Event Type: Error Event Source: EventLog Event Category: None Event ID: 6008 Date: 16/02/2011 Time: 09:10:43 User: N/A Computer: WELPLAN-1 Description: The previous system shutdown at 07:27:32 on 16/02/2011 was unexpected. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: db 07 02 00 03 00 10 00 Û....... 0008: 07 00 1b 00 20 00 42 02 .... .B. 0010: db 07 02 00 03 00 10 00 Û....... 0018: 07 00 1b 00 20 00 42 02 .... .B. Obviously this message doesn't help much, but what does all the hexadecimal mean? Will it help me track down the problem? Any pointers as to where to look?
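    A side note on the Data section: for Event ID 6008, the bytes are commonly reported to be two copies of a Windows SYSTEMTIME structure (eight little-endian 16-bit words: year, month, day-of-week, day, hour, minute, second, milliseconds) recording when the unexpected shutdown was detected. A quick sketch in Python decoding the first 16 bytes:

        import struct

        # First 16 bytes of the Data section (the second 16 are a repeat).
        raw = bytes.fromhex("db07020003001000 07001b0020004202".replace(" ", ""))

        # Little-endian SYSTEMTIME: eight unsigned 16-bit words.
        year, month, dow, day, hour, minute, sec, ms = struct.unpack("<8H", raw)
        print("%04d-%02d-%02d %02d:%02d:%02d.%03d" % (year, month, day, hour, minute, sec, ms))
        # -> 2011-02-16 07:27:32.578, matching the timestamp in the message

    So the hexadecimal only restates the shutdown time; it won't identify the cause by itself. The usual next step is to look at the System log entries recorded just before the crash, and at any minidumps under %SystemRoot%\Minidump.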

    Read the article

  • Synaptics touchpad scroll doesn't work in many applications

    - by Juliana Pena
    The Synaptics touchpad in my new laptop doesn't scroll at all in a number of applications, including Steam and Zune. I've narrowed the problem down to Synaptics not sending mouse wheel messages, but rather manipulating scroll bars. I'm only looking into one-finger side-of-touchpad scrolling, not two-finger scrolling. Is there a way to make it send wheel messages so that it is compatible with all applications? Configuration: Windows 7 64-bit, Dell XPS 14, latest Synaptics driver (I have tried both the Synaptics and the Dell versions). I've also tried changing every setting in the Synaptics options window (screenshot omitted), to no avail.

    Read the article

  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
    Principle: Any application or solution built in SharePoint must use a custom content type rather than adding columns to lists. The only exception is one-off solutions that have no life cycle, proofs of concept, etc.
    Creating content types: through the web UI (not portable, POC only), or in C# or declarative XML (these must be deployed as Features).
    Rule: Do not change the base XML for a content type after deploying it. The only exception to this rule is that you can re-deploy a modified content type definition after completely removing it from the environment (either programmatically or by hand).
    Updating content types (update and push down to child types): Web UI: manual for each environment; document the steps required for repeatability. Feature upgrade: the preferred solution. C#: if you created the content type through code you might want to go this route. Creating new, modified content types and hiding the old ones is not recommended, but can be useful for legacy cases.
    References: Create Custom Content Types in SharePoint 2010 (C#), Content Type Definitions (XML), Creating Content Types (XML and C#), Updating Approaches, Updating Child Content Types.
    Agree or disagree?
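    For reference, a minimal sketch of a declarative (XML) content type definition of the kind deployed as a Feature. The IDs here are made up for illustration: a child content type ID is the parent's ID (0x0101 for Document) plus "00" plus a new GUID, and each FieldRef must point at an existing site column.

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <!-- Hypothetical content type derived from Document (0x0101) -->
          <ContentType ID="0x010100B2E48E5C3D7A4F1E9C0D6B8A2F5E7C31"
                       Name="Invoice Document"
                       Group="Custom Content Types"
                       Description="Example: a custom document type"
                       Inherits="TRUE"
                       Version="0">
            <FieldRefs>
              <!-- Placeholder GUID; it must match a real site column -->
              <FieldRef ID="{7A282F86-69D9-40FF-AE1C-C746CF21256B}" Name="InvoiceNumber" />
            </FieldRefs>
          </ContentType>
        </Elements>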

    Read the article

  • How to clear stuck locked maildrop pop3 process

    - by Joshua
    I am using Cyrus for IMAP and POP. One of my users is getting the following error: "Unable to lock maildrop: Mailbox is locked by POP server." I can see where it starts in the log. I've read that there is no physical lock file anymore (I've tried looking for it anyway) and that the solution is to just wait for the timeout, or kill the offending pop3 process. I know this is happening because of a lossy connection on the part of the affected user, and that POP3 can only have one session active at a time. I need to manually clear the lock, and I am having trouble finding the offending pop3 process. I have tried lsof, but it doesn't say how long the individual files (sockets) have been open. I've reduced the TCP keepalive time down to 5 minutes, but I still need to reset this guy's lock. I could use some pointers. Thanks!
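    One way to approximate "how long has this session been open" is to use the owning process's start time rather than the socket itself. A minimal sketch with the third-party psutil library (assuming the POP3 daemon serves one connection per process, as Cyrus's pop3d normally does, and that the script runs as root so it can see other users' sockets):

        import time
        import psutil  # third-party: pip install psutil

        # List established POP3 (port 110) connections with the age of the
        # owning process; the long-lived one is likely the stale lock holder.
        now = time.time()
        for conn in psutil.net_connections(kind="tcp"):
            if conn.laddr and conn.laddr.port == 110 and conn.status == psutil.CONN_ESTABLISHED:
                if conn.pid is None:
                    continue  # no permission to see the owning process
                proc = psutil.Process(conn.pid)
                age_min = (now - proc.create_time()) / 60
                print("pid=%d %s peer=%s age=%.1f min" % (conn.pid, proc.name(), str(conn.raddr), age_min))

    Killing the stale pid (kill <pid>) should then release the maildrop lock.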

    Read the article

  • OS X Mavericks won't download in the App Store

    - by Mike Christensen
    I tried to download OS X Mavericks through the App Store yesterday. However, I had to leave work before it finished and shut down the machine mid-download. I assumed I'd be able to resume the download, or start over again later. However, now there's no way to get it to download again. When I find OS X Mavericks in the store, I click on the "Download" link. It asks for my Apple ID, which I enter, and then it does absolutely nothing. It doesn't begin downloading; it just still shows the "Download" button, and clicking on it again does nothing. I'm thinking something is stuck somewhere, like some flag that says this download is in progress or has already been completed. Is there some way to clear that out so I can start downloading it again?

    Read the article

  • Keyboard freezes / key sticks if a key is pressed repeatedly

    - by Aziz Rahmad
    I use an Acer 4530. This has been happening ever since Ubuntu 10.10, and still does now that I use 11.04 dual-booted with Linux Mint 10. Every time I press one key repeatedly, like when I read a long article on a website or in an ebook, or when I play games that require pressing the arrow keys repeatedly, the keyboard randomly freezes. That is, whatever I press on the keyboard has no effect, and the same goes for the touchpad. A USB mouse, however, works just fine. I later found out that it doesn't actually freeze; it's more as if the key gets stuck. For example, when I play Tetris, where I usually press the w (down) button repeatedly, after some time it "freezes", and if I then put the cursor in, say, the browser's address bar, it types "wwww..." indefinitely. The only way I can fix it is by suspending the laptop, either by using the mouse or by closing the lid. In that case the laptop wakes up again immediately instead of staying suspended, and everything is fine afterwards. (Normally my laptop wakes from suspend when any key is pressed.) It has happened since I first used Ubuntu 10.10, it also happens in Linux Mint 10, and it still happens now in Ubuntu 11.04. It never happened when I used Windows, though. Has anyone encountered a similar problem? Does anyone know how to fix it permanently?

    Read the article

  • What's the easiest way to migrate one Mac OS X volume to another

    - by teabot
    I want to move a volume from a smaller drive to a larger unformatted one. What is the best way to achieve this? Ideally I'd like the new volume to have the same name as the older volume as it contains user accounts, and is a destination of various symlinks that I have on other volumes. Update: I used Carbon Copy Cloner in the end and it worked perfectly. I was able to simply rename the new volume in Finder to the same name as the old volume and then powered down and removed the old drive on which the volume lived. When I restarted, the new volume seamlessly worked in place of the old volume.

    Read the article

  • Upstart: best way for shutdown hook?

    - by Binarus
    Hi, since Ubuntu has relied on upstart for some time now, I would like to use an upstart job to gracefully shut down certain applications on system shutdown or reboot. It is essential that the system's shutdown or reboot is stalled until these applications are shut down. The applications will be started manually on occasion, and on system shutdown they should automatically be ended by a script (which I already have). Since the applications can't be ended reliably without (nearly all) other services running, ending them has to happen before the rest of the shutdown begins. I think I can solve this with an upstart job triggered on shutdown, but I am unsure which events I should use in which manner. So far, I have read the following (partly contradictory) statements:
    - There is no general shutdown event in upstart
    - Use a stanza like "start on starting shutdown" in the job definition
    - Use a stanza like "start on runlevel [06S]" in the job definition
    - Use a stanza like "start on starting runlevel [06S]" in the job definition
    - Use a stanza like "start on stopping runlevel [!06S]" in the job definition
    From these recommendations, the following questions arise:
    - Is there or is there not a general shutdown event in Ubuntu's upstart?
    - What is the recommended way to implement a "shutdown hook"?
    - When are the runlevel [x] events triggered: once the runlevel has been entered, or while it is being entered?
    - Can we use something like "start on starting runlevel [x]" or "start on stopping runlevel [x]"?
    - What would be the best solution for my problem?
    Thank you very much, Binarus
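    For illustration, one commonly suggested shape for such a hook is a task that starts while the rc job for runlevels 0 and 6 is starting, so that rc (and with it the rest of the shutdown) waits for it to finish. This is an untested sketch; the job name and script path are placeholders:

        # /etc/init/stop-my-apps.conf (hypothetical job)
        description "End custom applications before shutdown proceeds"

        # Fire while the 'rc' job for halt (0) or reboot (6) is starting;
        # marking this a 'task' makes upstart block rc until it has finished.
        start on starting rc RUNLEVEL=[06]
        task

        exec /usr/local/bin/stop-my-apps.sh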

    Read the article

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma watch the SICP lectures I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then when I'm done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • Experience with AMCC 3ware 9650SE RAID cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650SE RAID card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the RAID card never started. This card has been in service for a couple of years without problems, and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo for the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated.

    Read the article

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company to which the development work has been offshored. The onshore team handles support and talks directly to the clients; we never talk to the clients directly, only to the onshore team. When requirements come in, the onshore team talks to the clients, writes requirement documents, and informs us. We produce design documents after studying the requirements (we follow a traditional waterfall model). But there is one problem with the whole process: nobody on either the offshore or the onshore team completely understands the functionality of the application. We just know it's a big, complex web app handling complex order processing, catalog management, campaign management, and other activities. We struggle with the design documents because the requirements are not clear, and it turns into a series of questions and answers going back and forth between the onshore team, the offshore team, and the clients. We are often told to work out the functionality from the code, but that's usually not feasible: the code base is huge, and understanding even a simple menu item can take days if not weeks. We tried asking the clients for a knowledge transfer about the application, but to no avail. Our manager often tells us to start coding even if the design document is not complete or the requirements are unclear. We start by coding the part of the requirements that seems clear and wait for the rest, which usually delays deployment by a month. In extreme cases we have very few errors in development and production, but the clients say that's not what they asked for. That starts a blame game and a series of change requests, and we end up developing something very different. My question is: how would you do development work when you don't know the functionality of the app fully? UPDATE: The development methodology isn't really my choice, and I am not my team's lead; it is simply the way things began here. I have tried to tell people about the advantages of agile, but to no avail. Besides, I don't think my team has the necessary mindset to work in an agile environment.

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
    Cloud computing is about scaling out all your application tiers, from the web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problem: just spin up another web server. You expect to double your storage needs for Azure Tables? No problem; you are covered there too: just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily, and transparently, so that your application simply uses more of SQL Azure if it needs to, without changing a single line of code? And what if you need to scale back down? Welcome to the world of database scalability.
    There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing. Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering. Data federation, a feature of SQL Azure that will be offered in the future, provides very interesting capabilities natively on the SQL Azure platform. More to come in a few weeks... Multitenant designs, on the other hand, are design practices and technologies intended to help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com). That project was an attempt to provide a sharding library for educational purposes.
    All that sounds really cool... and really esoteric... almost a form of database "voodoo". However, after being on multiple Azure projects I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple. How to achieve this, along with suggested application design guidelines, is covered in a white paper I just published. The white paper has two primary sections. The first describes the business and technical problem at hand, and how to classify it according to specific design patterns; for example, I discuss compressed shards through schema separation. The second offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (which I created earlier this year), complete with a management interface and more. A beta of this platform will be made available within weeks, as soon as the documentation is ready.
    I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback. You can download the white paper here: http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you; I am looking for feedback, thoughts and implementation opportunities.
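    To make the scale-out idea concrete, here is a toy sketch (not the Enzo library's actual API, which the white paper documents) of the core routing step in a sharded multitenant design: map each tenant to one of several databases, so that adding capacity means adding a connection string rather than changing application logic. Real designs typically use a lookup table rather than a modulo, so shards can be added without re-mapping existing tenants.

        # Hypothetical tenant-to-shard routing; connection strings are fake.
        SHARDS = [
            "Server=tcp:srv0.database.windows.net;Database=app0;...",
            "Server=tcp:srv1.database.windows.net;Database=app1;...",
        ]

        def shard_for_tenant(tenant_id):
            # Stable routing: a given tenant always lands on the same shard.
            return SHARDS[tenant_id % len(SHARDS)]

        print(shard_for_tenant(42))  # connection string for tenant 42's shard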

    Read the article

  • Issues with touch buttons in XNA (Release state to be precise)

    - by Aditya
    I am trying to make touch buttons in WP8 with all the states (Pressed, Released, Moved), but TouchLocationState.Released is not working. Here's my code. Class variables:

        bool touching = false;
        int touchID;
        Button tempButton;

    Button is a separate class with a method to switch states when touched. The Update method contains the following code:

        TouchCollection touchCollection = TouchPanel.GetState();
        if (!touching && touchCollection.Count > 0)
        {
            touching = true;
            foreach (TouchLocation location in touchCollection)
            {
                for (int i = 0; i < menuButtons.Count; i++)
                {
                    touchID = location.Id; // store the ID of the current touch
                    Point touchLocation = new Point((int)location.Position.X, (int)location.Position.Y);
                    Button button = menuButtons[i];
                    if (GetMenuEntryHitBounds(button).Contains(touchLocation)) // a method which returns a rectangle
                    {
                        button.SwitchState(true); // change the button state
                        tempButton = button;      // store the pressed button for later
                    }
                }
            }
        }
        else if (touchCollection.Count == 0) // clear the state of all buttons if no touch is detected
        {
            touching = false;
            for (int i = 0; i < menuButtons.Count; i++)
            {
                Button button = menuButtons[i];
                button.SwitchState(false);
            }
        }

    menuButtons is a list of the buttons on the menu. A separate block (also within Update) runs once touching is true:

        if (touching)
        {
            TouchLocation location;
            TouchLocation prevLocation;
            if (touchCollection.FindById(touchID, out location))
            {
                if (location.TryGetPreviousLocation(out prevLocation))
                {
                    Point point = new Point((int)location.Position.X, (int)location.Position.Y);
                    if (prevLocation.State == TouchLocationState.Pressed && location.State == TouchLocationState.Released)
                    {
                        if (GetMenuEntryHitBounds(tempButton).Contains(point))
                        {
                            // Execute the button action (excess code removed)
                        }
                    }
                }
            }
        }

    The code for switching the button state works fine, but the code that should trigger the action does not: location.State == TouchLocationState.Released mostly ends up being false. Even after I release the touch, it has a value of TouchLocationState.Moved. And what is more irritating is that it sometimes works! I am really confused and have been stuck for days now. Is this the right way? If yes, where am I going wrong? Or is there some other, more effective way to do this? PS: I also posted this question on Stack Overflow, then realized it is more appropriate on gamedev. Sorry if that counts as being redundant.

    Read the article

  • OpenGL: Want to keep gun on top of car and be able to control angle. Having difficulties.

    - by Blair
    So I am making a simple game. I want to put a gun on top of a car; basically, a long rod in the middle of a block is how I am modelling it right now. I want to be able to control the angle of the gun: it can go all the way forward so that it is parallel to the ground, facing the direction the car is moving, or it can point behind the car, and take any of the angles in between these positions. I have something like the following right now, but it's not really working. Is there a better way to do this that I am not seeing?

        # This will place the car
        glPushMatrix()
        glTranslatef(self.position.x, 1.5, self.position.z)
        glRotated(self.rotation, 0.0, 1.0, 0.0)
        glScaled(0.5, 0.5, 0.5)
        glCallList(self.model.gl_list)
        glPopMatrix()

        # This will place the gun on top
        glPushMatrix()
        glTranslatef(self.position.x, 2.5, self.position.z)
        glRotated(self.tube_angle, self.direction.z, 0.0, self.direction.x)
        print self.direction.z
        glRotated(45, self.position.z, 0.0, self.position.x)
        glScaled(1.0, 0.5, 1.0)
        glCallList(self.tube.gl_list)
        glPopMatrix()

    This almost works. It moves the gun up and down, but when the car moves around, the angle of the gun changes. Not what I want.
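    One common approach (a sketch only, reusing the fields and PyOpenGL setup from the snippet above) is to build the gun's transform on top of the car's, rather than rotating about axes derived from world-space position. Each glTranslatef/glRotated call composes in the current local frame, so applying the car's yaw first makes the pitch axis turn with the car automatically:

        # Hypothetical rework of the gun pass: car transform first, then
        # the gun's own offset and pitch in the car's local frame.
        glPushMatrix()
        glTranslatef(self.position.x, 1.5, self.position.z)  # car position
        glRotated(self.rotation, 0.0, 1.0, 0.0)              # car heading (yaw)
        glTranslatef(0.0, 1.0, 0.0)                          # up to the gun mount
        glRotated(self.tube_angle, 1.0, 0.0, 0.0)            # pitch about the car's local x axis
        glScaled(1.0, 0.5, 1.0)
        glCallList(self.tube.gl_list)
        glPopMatrix()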

    Read the article

  • How to stop Windows 7 from applying patches on shutdown

    - by Stabledog
    I have Windows 7 Pro set up to "download patches, but let me choose when to install them". However, on several occasions when I have shut down the OS, Windows Update has proceeded with a lengthy patch installation even though I gave no permission to do so. This is a bit scary to me... in particular, it seems I cannot trust the Windows Update settings. Is this official policy somewhere at Microsoft, or am I witnessing a bug? What can be done about it?

    Read the article

  • How to install software packages on a shared Red Hat Linux host account without root access or rpm?

    - by jeff
    I have a shared RHEL 4 host account where I do not have root privileges. I would like to install Git and Bash Complete in a way that they can be upgraded easily. To date, I've just been installing from source providing $HOME as a prefix to autoconf. Obviously this isn't ideal as I need to hunt down the files associated with the version I'm upgrading away from and delete them. I've tried using rpm but I just get -bash: rpm: command not found back so it's not available. I also looked into checkinstall but it looks like that requires rpm, dpkg, or Slackware's package manager to be available. Is there anything out there that can be used like a package manager without requiring root access or an existing package manager?

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or Super User. I've tried to do a good bit of searching on this, and while I found plenty of info on why to go with a VM or not, there isn't much practical advice on HOW best to set things up. Here's what I currently have:
    - HP EliteBook 1540: quad-core, 8GB memory, 500GB 7200 RPM HD, eSATA port. Decent machine; it should work just fine.
    - Windows 7 64-bit host OS. This also handles my day-to-day basic stuff (email, Word docs, etc.).
    - VMware Desktop.
    - Windows 7 64-bit guest OS with all my .NET dev tools, frameworks, etc. loaded on it, configured to use 2 cores and up to 6GB of memory. I figure the dev environment will need more than email, Word, etc.
    So, this seemed like a good option to me, but I find that with the VM running, things tend to slow down all around, on both the host and the guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack; things only got worse (it could be my eSATA enclosure). So, for all of that, I have basically two questions in one:
    - Has anyone used this sort of setup, and are there any gotchas, either around the VMware configuration or anything else I may have missed, that you can point me to?
    - Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed the setup. I'm not averse to Linux as a host OS, though I'm no Linux expert.
    If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • Cannot seem to disable ability to view temporary internet files via group policy

    - by user162707
    Windows XP Pro SP3, IE8 (8.0.6001.18702). Within local gpedit.msc I did the following:
    - User Config/Administrative Templates/Windows Components/IE: enabled "Disable changing Temporary Internet Files settings"
    - User Config/Administrative Templates/Windows Components/IE/Delete Browsing History: enabled all (11 items)
    However, there is a loophole that still lets me wipe history and other files: Tools, Internet Options, Browsing History, Settings, View Objects, delete everything, hit the up arrow, go to History (hidden folders have to be shown), delete everything. The only ways around this that I can see are to disable the General Internet Options page entirely via group policy, to set up NTFS folder restrictions on the Temporary Internet Files folder (though I'm worried about adverse effects, such as the files not being stored at all), or to grind down group policy somewhere else to prevent deleting the files. It just seems odd that group policy wouldn't have a setting to simply disable the Browsing History Settings button (which, moreover, reveals the folder location a user could browse to). So I'm just curious whether someone can confirm that this is simply not available in group policy, and what action they would suggest.

    Read the article

  • Unable to open InfoPath 2007 files in Outlook 2010

    - by Amy
    Our company recently began upgrading selected users to Outlook 2010; however, we all still remain on InfoPath 2007. Everything seems to be working fine for users going from InfoPath 2007 to Outlook 2010. Where we run into the problem is with Outlook 2010 users talking to other users who are also on Outlook 2010. When any user opens an InfoPath file from a shared site, completes and submits it, and then chooses to reply to it, our Outlook 2010 folks cannot open the emails. They pop open for just a second and then close down. The messages also appear in their email list with a different icon. Any ideas on how to get our Outlook 2010 users to see all of their InfoPath emails?

    Read the article

  • How to automatically save sessions with multiple windows in FireFox

    - by Matthew Talbert
    I've used primarily FF's built-in session management until recently. Now my needs have become more sophisticated. What I want is to be able to have two windows, one with a fixed set of tabs (approximately 5) and the other with "automatic save". That is, when I start FF, I want 1 window to open with my 5 tabs, and another to open with whatever I had when I shut down FF. I've installed "Session Manager", but I can't seem to get it to do what I want. It will save one window, but when I close one window, it removes that one from the session. Any suggestions to do this with either Session Manager or another plugin would be great.

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents. (It does other things as well, but this is enough background to illustrate the issue.) The basic dev architecture today is:
    - a single-page AJAX web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)
    And our hardware architecture today is:
    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests between the 2 web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing/delivering chats
    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I spread the backend Windows service logic across multiple machines (make it distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed, so that the work of the backend Windows service is spread across multiple machines for redundancy and uptime purposes? Will I need to rewrite it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances, so maybe it is something I should be less concerned about? Thanks in advance for any help!
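    The usual shape for this is competing consumers on a shared queue: any routing node can take any request, and losing one machine only reduces capacity. A toy sketch of the idea, in Python with Redis purely for illustration (the queue names and the pick_available_agent helper are hypothetical stand-ins for the real routing logic):

        import json
        import redis  # third-party client; any shared, durable queue gives the same shape

        r = redis.Redis(host="queue-host")  # hypothetical shared queue instance

        def pick_available_agent(request):
            # Stub: real logic would consult agent presence (e.g. from the Comet server).
            return "agent-1"

        # Competing consumers: every routing node runs this same loop, so the
        # nodes share the load and survive any single machine going down.
        while True:
            _, raw = r.blpop("chat-requests")  # blocks until a request arrives
            request = json.loads(raw)
            agent = pick_available_agent(request)
            r.rpush("agent:%s:deliveries" % agent, json.dumps(request))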

    Read the article

  • Repurpose Old Phones As Intercoms

    - by Jason Fitzpatrick
    If you’ve got some old wired telephones laying around for want of a project, this simple hack turns two wired phones into an intercom. Over at Hack A Day, Caleb Kraft shares his simple phone hack inspired by his VW bus. He writes: In case you haven’t noticed from my many comments on the subject, I drive a VW bus. It is a 1976 Westfalia camper with sage green paint and green plaid upholstery. I absolutely love it and so does the rest of my family. We go for drives in the country as well as camping regularly. We have found that the kids have a hard time communicating with us while we’re going higher speeds. These things aren’t the quietest automobiles in the world. Pushing this bread loaf shaped hunk of steel down the road with an engine that might top out at 75hp results in wind noise, engine noise, and of course, vibration. I decided to employ a really old hack to put two functional telephones in the bus so my kids can talk to my wife (or whoever the passenger is) without screaming quite so loud. This hack is extremely easy, fairly cheap, and can be done in just a few minutes. The result is a functional intercom that you could use pretty much anywhere! For more pics of his setup (and a neat video of his rather retro ride), check out the link below.

    Read the article

  • VLAN and switch setting - dummy

    - by Andras Sebestyen
    I need to speak to the network engineer tomorrow and would like to understand this beforehand, so apologies for the dummy question. At the school we have a cabinet with a 24-port Netgear managed switch carrying an admin VLAN and a curriculum VLAN. From what I've overheard, in the morning and around 4.30pm there are slowdown periods on the computers that connect to this switch. No one has been able to track this down yet. Questions:
    - What is the best way to track down this slowdown?
    - Would it be a workable temporary solution to physically separate the two networks onto 2 switches?
    - If that would work, how can I link them together so the curriculum network is still reachable from the admin side? Would I need an extra router then?
    Too many questions, but I have no clue where to start, and the gentleman is paid by the hour... can you see where I am coming from? :) Could you guide me in the right direction please? Any comment would be appreciated, and please send links if you down-vote the question :)

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect). Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes. 
We’ve found, at Red Gate that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is a an attempt at trying to classify our customers in to some sort of model or “Customer Maturity Framework” as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful” We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits in to one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” for “database”. However, we hit a problem. From talking to our customers we know that users are far less further down the road of mature database change management than they are for application development. As a simple example, no application developer, who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations. 
    This difference in maturity is the same as you move in to areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here
    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually
    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in process
    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases
    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers
    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Is a cluster the most cost-effective redundancy method for Windows Server 2003?

    - by Ryan
    We had a server with bad RAM, which caused a long outage while they figured it out, and our client-facing apps had to go down for a while. We are coming up with a solution for instant failover but are not sure what the most cost-effective method would be. Is a Windows Server cluster the best method for this? Also note that we are using Parallels Virtuozzo, if that makes any difference here. We found that Parallels has a documented method for setting this up, but it said it required a domain controller as well as a fiber connection to shared storage; is all that really needed? Thanks.

    Read the article
