Search Results

Search found 71953 results on 2879 pages for 'work environment'.


  • How do you QA and release software quickly (some call it agile) with a large team?

    - by sadadasd
    My work used to be a smaller team. We had less than 13 devs for a while. We are now growing rapidly and are over 20, with plans to be over 30 in a few months (triple the dev size!!!). Our process for QA'ing and releasing each build is no longer working. We currently have everyone develop new code and stick it onto a staging environment. A few days before our weekly release, we would freeze the staging environment and QA everything new and old. By our normal release time everything was usually deemed acceptable and pushed out the door to the main site. We reached a point where our code got too big, so we could no longer regress the entire site each week in QA. We were OK with that; we just made a list of everything important and only covered that and the new stuff. Now we are reaching a point where all the new stuff each week is becoming too big and too unstable. Our staging environment is really buggy week after week, and we are usually 1-2 hrs behind the normal release time. As the team grows further, we are going to drown with this same process. We are re-evaluating everything, and I personally am looking for suggestions / success stories. Many companies have been here before and progressed beyond it; we need to do the same.

    Read the article

  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember at one point someone telling me how close Migrate was to Migraine. This was a process that included migrating an environment from TFS 2008 to TFS 2010, and the process template needed to be migrated too. Here we are talking about CMMI v4.2 to CMMI v5.0. Now, migrating the TFS infrastructure is one thing; migrating the process template is a different deal - not hard, just involved. I followed a combination of steps that came from a blog post as the main guidance, and then MSDN (as suggested in the guidance post) to complement some tasks and steps. Again, the focus I have here is CMMI. The high-level steps taken to bring the TFS 2008 CMMI v4.2 template up to the TFS 2010 process template are:

    1) Backed up the Collection, Configuration and Warehouse databases.
    2) Downloaded the process template using Visual Studio 2010.
    3) Exported, modified and imported the Bug type definition.
    4) Exported, modified and imported the Scenario or Requirement type definition.
    5) Created and imported bug field mappings.

    (The export/import round-trip used in steps 3 and 4 is sketched below.) Now you can attempt to connect using Test Manager, and you should be able to get this going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made as a temporary workaround between the time we migrated and the time we converted the work items, and adding fields to enable communication between the project and the Test and Lab Manager component. There are 2 more parts to this post: the second will describe the detailed steps taken to complete the process template update, and the third will talk about the gotchas and fixes for the Lab Management portion.
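    For steps 3 and 4, the export/modify/import round-trip is typically done with the witadmin command-line tool that ships with Visual Studio 2010. A minimal sketch, assuming a placeholder collection URL and project name, and a work item type named Bug:

        REM Export the Bug type definition from the migrated project
        witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:Bug /f:Bug.xml

        REM ... edit Bug.xml to match the CMMI v5.0 definition ...

        REM Import the modified definition back into the project
        witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:Bug.xml

    The same round-trip applies to the Scenario/Requirement type definition in step 4.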

    Read the article

  • Office 2010 Client – Should I go with 32 bit or 64 bit?

    - by Sahil Malik
    As you know, the Office 2010 client now comes in both 32 bit and 64 bit versions. The question is, should you go with 32 bit or 64 bit? 64 is bigger than 32 .. so 64 is better, no? NO! Unless you have a very strong reason not to – GO WITH 32 bit. Why is that? Here is why:

    - 32 bit apps actually work better on 64 bit OSes in most scenarios, due to WoW and the additional 64 bit VLSW calculations.
    - If you have 2007 installations to support: SharePoint Designer 2010 cannot be used to work with SharePoint 2007 sites, so you will have to install SharePoint Designer 2007 32 bit side by side with SharePoint Designer 2010 32 bit. You cannot mix and match 32 bit and 64 bit here. Of course you can virtualize and not have this problem to begin with :-D.
    - 64 bit Office will break many things in your SharePoint experience for that client – for example, that fancy datasheet view won't work on lists. 32 bit Office apps don't have this issue.

    There are some extreme situations where you DO want 64 bit client apps, though. Specifically, if you have HUGE Excel sheets to work with, then the 64 bit Excel client is much better than the equivalent 32 bit Excel.

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering if a framework is necessary, or a must-have, if I plan to make a large website. Large website could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, and that it includes libraries and a lot of other advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP (the usual cache-aside pattern is sketched after this question). As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company. I am pretty sure the control panel for me to manage the basic stuff will be cPanel. For a developer like me that only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, having a framework seems too complicated. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.
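    A minimal sketch of the cache-aside pattern referred to above, using the PHP Memcached extension; the key prefix, TTL, table name and query are invented for illustration:

        <?php
        // Connect to a local memcached instance
        $cache = new Memcached();
        $cache->addServer('127.0.0.1', 11211);

        function get_page_content($cache, $mysqli, $pageId) {
            $key = 'page_content_' . $pageId;

            // Try the cache first
            $content = $cache->get($key);
            if ($content !== false) {
                return $content;
            }

            // Cache miss: fall back to MySQL
            $stmt = $mysqli->prepare('SELECT body FROM pages WHERE id = ?');
            $stmt->bind_param('i', $pageId);
            $stmt->execute();
            $stmt->bind_result($content);
            $stmt->fetch();
            $stmt->close();

            // Store for 5 minutes so repeat hits skip the database
            $cache->set($key, $content, 300);
            return $content;
        }
        ?>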

    Read the article

  • Working as a contractor--questions part II

    - by universe_hacker
    This is a follow-up to this discussion: Working as a contractor--questions. I still would like to get more info on the following points, when working as a contractor as opposed to a direct employee:

    Housing: for short-term contracts (let's say 6 months or less) that are not in your home area, where and how do you search for short-term, flexible housing? Especially since an employer would typically want you to start immediately, so you don't have time to go out and explore. Also, would you typically look for furnished apartments, because the cost of transporting your own furniture for a few months is not justified?

    Work hours and pay: are contractors more strictly supervised (as far as getting specified work done) because they get paid by the hour? They are also supposed to get overtime pay (at a higher rate) if they work more than 40 hours per week - does this really happen? Or do they work unpaid extra hours just like many regular employees?

    Per diem: some potential employers have mentioned paying a "per diem", which is essentially a non-taxable daily allowance that is supposed to be used for living expenses. This money gets subtracted from your per-hour rate, but the advantage is that you pay less tax. However, from the information I have seen, the per diem can only be paid if you maintain a "permanent" residence you intend to return to. Is this checked in practice, and if yes, how?

    Read the article

  • Business Forces: SOA Adoption

    The only constant in today's business environment is change. Businesses that continuously foresee change and adapt quickly will gain market share and increased growth. In our ever-growing global business environment, change is driven by data: collecting, maintaining, verifying and distributing it. Companies today are made and broken over data. Would anyone still use Google if it did not have one of the most accurate search indexes on the internet? No, because its value is in its data and the quality of that data. Due to the increasing focus on data, companies have been adopting new methodologies to gain more control over their data while attempting to reduce the cost of maintaining it. In addition, companies are trying to reduce the time it takes to analyze data against various market forces, so they can foresee changes before they actually occur.

    Benefits of Adopting SOA

    - Services can be maintained separately from other services and applications, so a change in one service will only affect itself and its client services or applications.
    - Services allow system functionality to be distributed across a network, or multiple networks.
    - The cost of maintaining business functionality is much higher in standard application development than in SOA, because a single service can be maintained and shared across applications instead of duplicating business functionality by hard-coding it into several applications.
    - When multiple applications use a single service for a specific business function, all of the data being processed will be consistent in quality and accuracy across the applications.

    Disadvantages of Adopting SOA

    - Increased initial costs and timelines, because services need to be created and applications need to be modified to call them.
    - For an SOA project to be successful, it must obtain company and management support in order to gain the proper exposure, funding and attention.
    - If SOA is new to a company, it must also provide the proper training for the project to be designed and implemented correctly.

    References: Tews, R. (2007). Beyond IT: Exploring the Business Value of SOA. SOA Magazine, Issue XI.

    Read the article

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops:

        [Source] - [QC] - [Production]

    At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find how this process is accomplished elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources detailing specific workflows, or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

    Read the article

  • Can I upgrade my Ubuntu version and change it to be the primary OS after originally installing with Wubi?

    - by Garrick Wann
    I have recently installed Ubuntu 12.04 using Wubi 12.04, and I now wish to upgrade to a full installation of Ubuntu 14.04. Before attempting to upgrade through the update center, I did some research on upgrading from a Wubi installation (alongside Windows) to a full installation making Ubuntu the primary and only OS, and found that it is in fact doable through the update center; it is just highly recommended to perform a full backup before doing so. I have now finished backing up all the data I need to worry about, began the upgrade process through the update center, and received the following error:

        Your graphics hardware may not be fully supported in Ubuntu 14.04.
        Running the 'unity' desktop environment is not fully supported by your
        graphics hardware. You will maybe end up in a very slow environment after
        the upgrade. Our advice is to keep the LTS version for now. For more
        information see https://wiki.ubuntu.com/X/Bugs/UpdateManagerWarningForUnity3D
        Do you still want to continue with the upgrade?

    My questions are as follows:
    A. Isn't 14.04 an LTS version???
    B. What are your recommendations to ensure my graphics driver is installed correctly and I'm not stuck with bad configs/install?

    Read the article

  • Getting the total number of processors a computer has (c#)

    - by mbcrump
    Here is a code snippet for getting the total number of processors a computer has without using Environment.ProcessorCount. I found out that Environment.ProcessorCount does not necessarily return the correct value on some Intel-based CPUs.

        using System;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication4
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int c = ProcessorCount;
                    Console.WriteLine("The computer has {0} processors", c);
                    Console.ReadLine();
                }

                private static class NativeMethods
                {
                    [StructLayout(LayoutKind.Sequential)]
                    internal struct SYSTEM_INFO
                    {
                        public ushort wProcessorArchitecture;
                        public ushort wReserved;
                        public uint dwPageSize;
                        public IntPtr lpMinimumApplicationAddress;
                        public IntPtr lpMaximumApplicationAddress;
                        public UIntPtr dwActiveProcessorMask;
                        public uint dwNumberOfProcessors;
                        public uint dwProcessorType;
                        public uint dwAllocationGranularity;
                        public ushort wProcessorLevel;
                        public ushort wProcessorRevision;
                    }

                    [DllImport("kernel32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
                    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);
                }

                public static int ProcessorCount
                {
                    get
                    {
                        NativeMethods.SYSTEM_INFO lpSystemInfo = new NativeMethods.SYSTEM_INFO();
                        NativeMethods.GetNativeSystemInfo(ref lpSystemInfo);
                        return (int)lpSystemInfo.dwNumberOfProcessors;
                    }
                }
            }
        }

    Read the article

  • Ubuntu 14.04 LTS 64-bit install alongside Windows 7

    - by user289222
    I've tried installing Ubuntu 14.04 LTS alongside my Windows 7 OS, following the exact procedure given by the Ubuntu website and random other tutorials. I've tried with a LiveCD and with a USB stick, but I always run into the same problem. When I'm at the screen where I'm allowed to select how I want to install Ubuntu ("alongside", "erase Windows 7", "something else"), the first option says "Install Ubuntu inside Windows 7" instead of "Install Ubuntu alongside Windows 7". Pretty much every tutorial I've seen says that the option should read "alongside". I click "inside" anyway, and Ubuntu doesn't install at all. Instead, my computer just reboots and goes back to the "Try Ubuntu or Install Now" screen. This happens regardless of whether I use a LiveCD or a USB stick. I've also tried manually resizing my partitions using "something else". Oddly, I see 4 sda partitions:

        /dev/sda     type    size        used
        /dev/sda1            1mb         unknown    Windows 7 (loader)
        /dev/sda2    ntfs    208mb       unknown    Recovery Windows Environment (loader)
        /dev/sda3    ntfs    ~752000mb   unknown    Recovery Windows Environment (loader)
        /dev/sda4            ~18000mb    unknown

    I try resizing the largest partition, but some sort of internal error occurs and it doesn't let me resize my partitions. Any ideas on what's going on and how to solve it?

    Read the article

  • How to fix Sketchup in Wine when tool starts, but displays empty workspace?

    - by Chaos_99
    I've installed Wine 1.6 and winetricks on a Linux Mint 15 system, then downloaded the latest SketchUp 2013 'Make' Windows installer and installed it through Wine. I prepared the Wine environment by starting with WINEARCH=win32, installed corefonts and ie8, and enabled the override for the 'riched20' library (I've no idea what that last bit does, but it was advised in some guides; the preparation steps are sketched below). I've also tried without these steps. Only the win32 part seems to make a difference, as the installer complains about not finding SP2 otherwise. SketchUp installs successfully and starts, but displays an empty viewport. The program is responsive and everything works; it's just that you can't see anything. I don't get any OpenGL error, and the registry entries seem fine according to the OpenGL issue workarounds floating around the net. I still think it has something to do with OpenGL not working properly - maybe not in the Wine environment, but in the Linux system? I'm running on a Lenovo W520 with NVIDIA/Intel hybrid cards, but only the NVIDIA card is active and the proprietary nvidia (319) drivers are installed. glxgears runs fine, but clamps at 2x the refresh rate. glxinfo outputs:

        direct rendering: Yes
        server glx vendor string: NVIDIA Corporation
        server glx version string: 1.4

    I'm willing to try any Linux or Wine OpenGL tests to narrow down the problem, if you can offer any advice on what to use.
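    For reference, the prefix preparation described above would look roughly like this as shell commands; the prefix path and installer file name are placeholders:

        # Create a fresh 32-bit prefix (the installer complains in a 64-bit one)
        export WINEARCH=win32
        export WINEPREFIX="$HOME/.wine-sketchup"
        wineboot

        # Install core fonts, IE8 and the native riched20 override
        winetricks corefonts ie8 riched20

        # Run the SketchUp installer inside the prefix
        wine SketchUpMake2013.exe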

    Read the article

  • Web Applications Desktop Integrator (WebADI) Feature for Install Base Mass Update in 12.1.3

    - by LuciaC
    Purpose
    The integration of WebADI technology with the Install Base Mass Update function is designed to make creation and update of bulk item instances much easier than in the past.

    What is it?
    WebADI is an Excel-based desktop application where users can download an Excel template with item instances pre-populated based on search criteria. Users can create and update item instances in the Excel sheet, and finally upload the Excel data using an "upload" option available in the Excel menu. On upload, the modified data is bulk-loaded into interface tables, which are processed by an asynchronous concurrent program that users can monitor for the uploaded results.

    Advantage
    This allows users to work in a disconnected environment: session timeouts can be avoided, since once a template is downloaded the user can work disconnected and upload once all updates are done. The data can also be saved for later update and upload.

    For more details review the following:
    R12.1.3 Install Base WebADI Mass Update Feature (Doc ID 1535936.1)
    How To Use Install Base WebADI Mass Update Feature In Release 12.1.3 (Doc ID 1536498.1)

    Read the article

  • Alternatives to time tracking methodologies [closed]

    - by Brandon Wamboldt
    Question first: What are some feasible alternatives to time tracking for employees in a web/software development company, and why are they better options?

    Explanation: I work at a company where we work like this. Everybody is paid a salary. We have 3 types of work: Contract, Adhoc and Internal (non-billable). Adhoc is just small changes that take a few hours; we just bill the client at the end of the month. Contracts are signed and we have this big long process, the usual. We figure out how much to charge by getting an estimate of the time involved (from the designers and the developers) and multiplying it by our hourly rate, and that's it. So say we estimate 50 hours for a website. We have time tracking software and have to record the time we spend on it in 15-minute blocks (7:00 to 7:15, for example), the project name, and some comments. Now if we go over the 50 hours, we are both losing money and being inefficient. Now that I've explained how the system works, my question is how else it can be done, if a better method exists (which I'm sure one must). Nobody here likes the current system; we just can't find an alternative. I'd be more than willing to work longer hours on a project to get it done in time, but I'm much less inclined to do so under the current system. I'd love to be able to sum up (or link to) this post for my manager, to show them why we should use abc system instead of this one.

    Read the article

  • Case studies for successful service (project) based software development businesses without constant overtime from its employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set a new expectation that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches; this is the new 40. The reasoning is that this enables the company to provide benefits to its employees, such as monetary incentives and training, because the company is more profitable:

        more hours worked = more billable hours = larger profit

    I understand the need for profitability and the occasional crunch time, and have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods of increasing profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies for a successful services-based business model (for software development companies) that does not require consistent overtime from its employees?

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best practice guide for you. Click below to download it:

    Best Practices for Building a Virtualized SPARC Computing Environment

    Inside you'll find recommendations for how and when to leverage technologies like:

    - SPARC T4
    - OVM for SPARC hypervisor (version 2.2 and newer)
    - Solaris 11
    - Ops Center 12c
    - ZFS Storage Appliance
    - Oracle network switches

    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:

    - Easy, GUI-based provisioning of new VMs
    - Automated HA failover in the event of physical server failures
    - Automatic load balancing across a cluster of VM hosts
    - Complete end-to-end monitoring

    You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • How do I configure upstart to run a script on shutdown when the process takes longer than 10secs?

    - by Tiris
    I am running Ubuntu 11.10 in a virtual machine (VirtualBox) to learn more about development in Linux. I am using a git repository to save my work, and have written a script to bundle my work up and save it to the shared folder for use while the virtual machine is not running. I would like to automatically run this script before shutdown so that my work is always available when the VM is off (currently I have to run the script manually). I don't know if upstart is the best way to accomplish this, but this is the config that I wrote as a test:

        description "test script to run at shutdown"
        start on runlevel [056]
        task

        script
            touch /media/sf_LinuxEducation/start
            sleep 15
            touch /media/sf_LinuxEducation/start-long
        end script

        pre-start script
            touch /media/sf_LinuxEducation/pre-start
            sleep 15
            touch /media/sf_LinuxEducation/pre-start-long
        end script

        post-start script
            touch /media/sf_LinuxEducation/post-start
            sleep 15
            touch /media/sf_LinuxEducation/post-start-long
        end script

        pre-stop script
            touch /media/sf_LinuxEducation/pre-stop
            sleep 15
            touch /media/sf_LinuxEducation/pre-stop-long
        end script

        post-stop script
            touch /media/sf_LinuxEducation/post-stop
            sleep 15
            touch /media/sf_LinuxEducation/post-stop-long
        end script

    The result is that only one touch is accomplished (the first touch in pre-start). What do I need to change to see one of the touches after the sleep work? Or is there an easier way to get this accomplished? Thanks in advance.
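    One stanza worth experimenting with here, offered as a guess rather than a confirmed fix: upstart gives a stopping job only a short grace period before sending SIGKILL (the default kill timeout is 5 seconds), which is shorter than the 15-second sleeps above. A minimal sketch of the same job with a longer timeout:

        description "test script to run at shutdown"
        start on runlevel [056]
        task

        # allow the job's processes up to 60 seconds before upstart sends SIGKILL
        kill timeout 60

        pre-stop script
            touch /media/sf_LinuxEducation/pre-stop
            sleep 15
            touch /media/sf_LinuxEducation/pre-stop-long
        end script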

    Read the article

  • Windows Server or Linux for final project

    - by user1433490
    A few weeks ago I came up with an idea to develop a mobile app which will direct students at my university to the nearest available printer. The whole thing is part of my final project. The Android-based app will need to perform the following tasks:

    1. The user's location on campus is sent to the server. (Assume this part works just fine.)
    2. The server sends an SNMP request to the printers in the user's vicinity. I'll probably use PHP or Python for that part (a sketch of the SNMP call follows at the end of this question).
    3. The data requested by SNMP is processed and sent back to the client.

    My question concerns the server. The university's IT manager offered me a designated server for development, which sounds great. Now I need to choose which OS I want installed on the server - Windows Server or Linux (I don't know which versions). I don't have any server programming/operating experience, but generally speaking I feel more comfortable in a Windows environment (just because that has always been my OS). I don't have much time for learning a new OS, but when does it generally make sense to host or develop server-side applications in a Windows environment versus a Linux environment?
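    A minimal sketch of the SNMP call mentioned in step 2, using PHP's Net-SNMP extension; the printer addresses are placeholders, and hrPrinterStatus (1.3.6.1.2.1.25.3.5.1.1) comes from the standard Host Resources MIB:

        <?php
        // Poll each candidate printer for its status
        $printers = array('10.0.0.21', '10.0.0.22');
        foreach ($printers as $ip) {
            // instance .1 = first printer device on the host
            $status = @snmpget($ip, 'public', '1.3.6.1.2.1.25.3.5.1.1.1');
            echo "$ip: " . ($status === false ? 'unreachable' : $status) . "\n";
        }
        ?>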

    Read the article

  • DotNetOpenAuth OpenID Provider "Sequence contains more than one element"

    - by Matthew Johnson
    Hello, all, I'm having trouble implementing my OpenID provider with DNOA 3.4.3. Everything was going absolutely peachy until I needed AX support as well. I set AXFetchAsSregTransform in the web config, as recommended by Andrew at http://groups.google.com/group/dotnetopenid/browse_thread/thread/5629a24c0a7e8d99. Doing this caused me to get the exception "Sequence contains more than one element" on my decide.aspx page, however, and I haven't been able to get past it. The following lines are involved (Edit: strangely enough, the first line is not throwing the error anymore; the SendResponse() call is now triggering the exception):

        ClaimsRequest requestedFields = ProviderEndpoint.PendingRequest.GetExtension<ClaimsRequest>();
        ProviderEndpoint.SendResponse();

    Any thoughts on why this may be? Any help would be greatly appreciated! The logs leading up to the error are as follows:

    2010-04-28 12:38:20,247 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/provider.ashx?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.mode=checkid_setup&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_request&openid.ext1.type.email=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ext1.type.fullname=http%3A%2F%2Faxschema.org%2FnamePerson&openid.ext1.type.language=http%3A%2F%2Faxschema.org%2Fpref%2Flanguage&openid.ext1.required=email&openid.return_to=http%3A%2F%2Fmyrelyingparty%2Flogin.jsp%3Foidreturn%3D%252Fhome&openid.assoc_handle=%7B634080802953194640%7D%7BHxjFNw==%7D%7B20%7D&openid.realm=http%3A%2F%2Fmyrelyingparty
    2010-04-28 12:38:20,285 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Processing incoming CheckIdRequest (2.0) message:
        openid.claimed_id: http://specs.openid.net/auth/2.0/identifier_select
        openid.identity: http://specs.openid.net/auth/2.0/identifier_select
        openid.assoc_handle: {634080802953194640}{HxjFNw==}{20}
        openid.return_to: http://myrelyingparty/login.jsp?oidreturn=%2Fhome
        openid.realm: http://myrelyingparty/
        openid.mode: checkid_setup
        openid.ns: http://specs.openid.net/auth/2.0
        openid.ns.ext1: http://openid.net/srv/ax/1.0
        openid.ext1.mode: fetch_request
        openid.ext1.type.email: http://axschema.org/contact/email
        openid.ext1.type.fullname: http://axschema.org/namePerson
        openid.ext1.type.language: http://axschema.org/pref/language
        openid.ext1.required: email
    2010-04-28 12:38:22,773 (GMT-7) [14] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx
    2010-04-28 12:38:36,167 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx
    2010-04-28 12:38:38,147 (GMT-7) [14] ERROR DotNetOpenAuth.Messaging - Protocol error: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery.
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverable(OpenIdProvider provider) at OpenIdProviderWebForms.decide.Page_Load(Object src, EventArgs e) at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) 2010-04-28 12:38:38,149 (GMT-7) [14] INFO DotNetOpenAuth.Yadis - Relying party discovery at URL http://myrelyingparty/ failed. DotNetOpenAuth.Messaging.ProtocolException: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\Messaging\ErrorUtilities.cs:line 235 at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 446 at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 424 at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\HostProcessedRequest.cs:line 142 2010-04-28 12:38:42,076 (GMT-7) [8] ERROR OpenIdProviderWebForms.Global - An unhandled exception was raised. Details follow: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.InvalidOperationException: Sequence contains more than one element at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source) at DotNetOpenAuth.OpenId.Provider.Request.GetExtension[T]() in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\Request.cs:line 176 at DotNetOpenAuth.OpenId.Extensions.ExtensionsInteropHelper.ConvertSregToMatchRequest(IHostProcessedRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Extensions\ExtensionsInteropHelper.cs:line 180 at DotNetOpenAuth.OpenId.Behaviors.AXFetchAsSregTransform.DotNetOpenAuth.OpenId.Provider.IProviderBehavior.OnOutgoingResponse(IAuthenticationRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Behaviors\AXFetchAsSregTransform.cs:line 139 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.ApplyBehaviorsToResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 482 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.SendResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 325 at OpenIdProviderWebForms.decide.Yes_Click(Object sender, EventArgs e) in C:\Projects\OpenIdProviderWebForms\decide.aspx.cs:line 130 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\root\7f580b93\b3e4d917\App_Web_tulh9ymv.1.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, 
Boolean& completedSynchronously)

    Read the article

  • How do I connect to DB2 with DBI and mod_perl?

    - by Matthew
    I'm having issues getting DBI's IBM DB2 driver to work with mod_perl. My test script is:

        #!/usr/bin/perl
        use strict;
        use CGI;
        use Data::Dumper;
        use DBI;

        {
            my $q;
            my $dsn;
            my $username;
            my $password;
            my $sth;
            my $dbc;
            my $row;

            $q = CGI->new;
            print $q->header;
            print $q->start_html();

            $dsn      = "DBI:DB2:SAMPLE";
            $username = "username";
            $password = "password";

            print "<pre>" . $q->escapeHTML(Dumper(\%ENV)) . "</pre>";

            $dbc = DBI->connect($dsn, $username, $password);
            $sth = $dbc->prepare("SELECT * FROM SOME_TABLE WHERE FIELD='SOMETHING'");
            $sth->execute();
            $row = $sth->fetchrow_hashref();
            print "<pre>" . $q->escapeHTML(Dumper($row)) . "</pre>";
            print $q->end_html;
        }

    This script works as CGI but not under mod_perl. I get this error in Apache's error log:

        DBD::DB2::dr connect warning: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /usr/lib/perl5/site_perl/5.8.8/Apache/DBI.pm line 190.
        DBI connect('SAMPLE','username',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /data/www/perl/test.pl line 15

    First of all, why is it using ODBC? The native DB2 driver is installed (hence it works as CGI). Running Apache 2.2.3, mod_perl 2.0.4 under RHEL5. This guy had the same problem as me: http://www.mail-archive.com/[email protected]/msg22909.html But I have no idea how he fixed it. What does mod_php4 have to do with mod_perl? Any help would be greatly appreciated; I'm having no luck with Google.

    Update: As james2vegas pointed out, the problem has something to do with PHP. If I disable PHP altogether, I get a different error:

        Total Environment allocation failure! Did you set up your DB2 client environment?

    I believe this error has to do with environment variables not being set up correctly, namely DB2INSTANCE. However, I'm not able to turn off PHP to resolve this problem (I need it for some legacy applications). So I now have 2 questions:

    1. How can I fix the original issue without disabling PHP altogether?
    2. How can I fix the environment issue? I've set the DB2INSTANCE, DB2_PATH and SQLLIB variables correctly using SetEnv and PerlSetEnv in httpd.conf, but with no luck.

    Note: I've edited the code to determine if the problem was to do with global variable persistence.
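    For reference, a sketch of the environment setup being attempted; the instance name and paths are placeholders. One thing worth noting (offered as a suggestion, not a verified fix) is that SetEnv/PerlSetEnv only populate the per-request environment, so an alternative is to source the DB2 profile in the shell environment Apache itself starts from, e.g. /etc/sysconfig/httpd on RHEL, so the client libraries see the variables at process startup:

        # In httpd.conf: per-request environment for mod_perl (what was tried above)
        PerlSetEnv DB2INSTANCE db2inst1
        PerlSetEnv DB2_PATH    /opt/ibm/db2/V9.5
        PerlSetEnv SQLLIB      /home/db2inst1/sqllib

        # Alternative, in /etc/sysconfig/httpd: let the whole server inherit
        # the DB2 client environment before any module loads
        . /home/db2inst1/sqllib/db2profile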

    Read the article

  • Phone-book Database Help - Python

    - by IDOntWantThat
    I'm new to programming and have an assignment I've been working at for a while. I understand defining functions and a lot of the basics, but I'm kind of running into a brick wall at this point. I'm trying to figure this one out and don't really understand how the 'class' feature works yet. I'd appreciate any help with this one, and also any pointers to Python resources that can dumb down how/why classes are used.

    The assignment: You've been working on a database project at work for some time now. Your boss encourages you to program the database in Python. You disagree, arguing that Python is not a database language, but your boss persists by providing the source code below for a sample telephone database. He asks you to do two things:

    1. Evaluate the existing source code and extend it to make it useful for managers in the firm. (You do not need a GUI interface; just work on the database aspects, data entry and retrieval. Of course you must get the program to run and work properly.)
    2. Critically evaluate Python as a database tool.

    Import the sample code below into the Python IDLE, enhance it, run it and debug it. Add features to make this a more realistic database tool by providing for easy data entry and retrieval.

        import shelve
        import string

        UNKNOWN = 0
        HOME = 1
        WORK = 2
        FAX = 3
        CELL = 4

        class phoneentry:
            def __init__(self, name = 'Unknown', number = 'Unknown', type = UNKNOWN):
                self.name = name
                self.number = number
                self.type = type

            # create string representation
            def __repr__(self):
                return('%s:%d' % ( self.name, self.type ))

            # fuzzy compare of two items
            def __cmp__(self, that):
                this = string.lower(str(self))
                that = string.lower(that)
                if string.find(this, that) >= 0: return(0)
                return(cmp(this, that))

            def showtype(self):
                if self.type == UNKNOWN: return('Unknown')
                if self.type == HOME: return('Home')
                if self.type == WORK: return('Work')
                if self.type == FAX: return('Fax')
                if self.type == CELL: return('Cellular')

        class phonedb:
            def __init__(self, dbname = 'phonedata'):
                self.dbname = dbname
                self.shelve = shelve.open(self.dbname)

            def __del__(self):
                self.shelve.close()
                self.shelve = None

            def add(self, name, number, type = HOME):
                e = phoneentry(name, number, type)
                self.shelve[str(e)] = e

            def lookup(self, string):
                list = []
                for key in self.shelve.keys():
                    e = self.shelve[key]
                    if cmp(e, string) == 0:
                        list.append(e)
                return(list)

        # if not being loaded as a module, run a small test
        if __name__ == '__main__':
            foo = phonedb()
            foo.add('Sean Reifschneider', '970-555-1111', HOME)
            foo.add('Sean Reifschneider', '970-555-2222', CELL)
            foo.add('Evelyn Mitchell', '970-555-1111', HOME)

            print 'First lookup:'
            for entry in foo.lookup('reifsch'):
                print '%-40s %s (%s)' % ( entry.name, entry.number, entry.showtype() )
            print
            print 'Second lookup:'
            for entry in foo.lookup('e'):
                print '%-40s %s (%s)' % ( entry.name, entry.number, entry.showtype() )

    I'm not sure if I'm on the right track, but here is what I have so far:

        def openPB():
            foo = phonedb()
            print 'Please select an option:'
            print '1 - Lookup'
            print '2 - Add'
            print '3 - Delete'
            print '4 - Quit'
            entry = int(raw_input('>> '))
            if entry == 1:
                namelookup = raw_input('Please enter a name: ')
                for entry in foo.lookup(namelookup):
                    print '%-40s %s (%s)' % (entry.name, entry.number, entry.showtype())
            elif entry == 2:
                name = raw_input('Name: ')
                number = raw_input('Number: ')
                showtype = input('Type (UNKNOWN, HOME, WORK, FAX, CELL): \n>> ')
                for entry in foo.add(name, number, showtype):  # Trying to figure out this part
                    print '%-40s %s (%s)' % (entry.name, entry.number, entry.showtype())
            elif entry == 3:
                delname = raw_input('Please enter a name to delete: ')
                # Trying to figure out this part
                print "Contact '%s' has been deleted" % (delname)
            elif entry == 4:
                print "Phone book is now closed"
                quit
            else:
                print "Your entry was not recognized."
                openPB()

        openPB()
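    Since the delete branch is the part flagged above, here is a minimal sketch of the kind of helper that could back it, assuming the phonedb class from the assignment; the method name is invented for illustration:

        # hypothetical addition to the phonedb class:
        # remove every entry whose key matches the given name
        def delete(self, name):
            deleted = 0
            for e in self.lookup(name):
                del self.shelve[str(e)]   # keys were stored as str(phoneentry)
                deleted = deleted + 1
            return deleted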

    Read the article

  • Cannot Logout of Facebook with Facebook C# SDK

    - by Ryan Smyth
    I think I've read just about everything out there on the topic of logging out of Facebook inside a desktop application. Nothing so far works. Specifically, I would like to log the user out so that they can switch identities, e.g. people sharing a computer at home could then use the software with their own Facebook accounts; with no way to switch accounts, it's quite messy. (I have not yet tested switching Windows user accounts, as that is simply far too much to ask of the end user and should not be necessary.) Now, I should say that I have set the application to use these permissions:

        string[] permissions = new string[] { "user_photos", "publish_stream", "offline_access" };

    So, "offline_access" is included there. I do not know if this does/should affect logging out or not. Again, my purpose for logging out is merely to switch users. (If there's a better approach, please let me know.) The purported solutions seem to be:

    1. Use the JavaScript SDK (FB.logout())
    2. Use "m.facebook.com" instead
    3. Create your own URL (and possibly use m.facebook.com)
    4. Create your own URL and use the session variable (in ASP.NET)

    The first is kind of silly. Why resort to JavaScript when you're using C#? It's kind of a step backwards and has a lot of additional overhead in a desktop application. (I have not tried this, as it's simply disgustingly messy to do in a desktop application.) If anyone can confirm that this is the only working method, please do so; I'm desperately trying to avoid it. The second doesn't work. Perhaps it worked in the past, but my umpteen attempts to get it to work have all failed. The third doesn't work. I've tried umpteen dozen variations with zero success. The last option doesn't work for a desktop application, because it's not ASP.NET and you don't have a session variable to work with. The Facebook C# SDK logout also no longer works, i.e.:

        public FacebookLoginDialog(string appId, string[] extendedPermissions, bool logout)
        {
            IDictionary<string, object> loginParameters = new Dictionary<string, object>
            {
                { "response_type", "token" },
                { "display", "popup" }
            };
            _navigateUri = FacebookOAuthClient.GetLoginUrl(appId, null, extendedPermissions, logout, loginParameters);
            InitializeComponent();
        }

    I remember it working in the past (which truly puzzles me), but it now directs the user to the Facebook mobile page, where the user must manually log out. Now, I could do browser automation to automatically click the logout link for the user; however, this is prone to breaking if Facebook updates the mobile UI. It is also messy, and possibly a worse solution than trying to use the JavaScript SDK FB.logout() method (though not by much). I have searched for some kind of documentation, but I cannot find anything in the Facebook developer documentation that illustrates how to log an application out. Has anyone solved this problem, or seen any documentation that can be ported to work with the Facebook C# SDK? I am certainly open to using a WebClient or HttpClient/Response if anyone can point to some documentation that could work with it. I simply have not been able to find any low-level documentation that shows how this approach could work. Thank you in advance for any advice, pointers, or links.

    Read the article

  • Termite colony simulator using java

    - by ashii
    Hi everyone, I have to design a simulator that will maintain an environment, which consists of a collection of patches arranged in a rectangular grid of arbitrary size. Each patch contains zero or more wood chips. A patch may be occupied by one or more termites or predators, which are mobile entities that live within the world and behave according to simple rules. A TERMITE can pick up a wood chip from the patch that it is currently on, or drop a wood chip that it is carrying. Termites travel around the grid by moving randomly from their current patch to a neighbouring patch, in one of four possible directions. New termites may hatch from eggs, and this is simulated by the appearance of a new termite at a random patch within the environment. A PREDATOR moves in a similar way to termites, and if a predator moves onto a patch that is occupied by a termite, then the predator eats the termite. At initialization, the termites, predators and wood chips are distributed randomly in the environment. Simulation then proceeds in a loop, and the new state of the environment is obtained at each iteration. I have designed the arena using a JPanel, but I'm not able to randomly place wood, termites and predators in that arena. Can anyone help me out? My code for the arena is as follows:

        import java.awt.*;
        import javax.swing.*;

        public class Arena extends JPanel
        {
            private static final int Rows = 8;
            private static final int Cols = 8;

            public void paint(Graphics g)
            {
                Dimension d = this.getSize();
                // don't draw both sets of squares, when you can draw one
                // fill in the entire thing with one color
                g.setColor(Color.WHITE);
                // make the background
                g.fillRect(0, 0, d.width, d.height);
                // draw only black
                g.setColor(Color.BLACK);
                // pick a square size based on the smallest dimension
                int sqsize = ((d.width < d.height) ? d.width / Cols : d.height / Rows);
                // loop for rows
                for (int row = 0; row < Rows; row++)
                {
                    int y = row * sqsize;       // y stays same for entire row, set here
                    int x = (row % 2) * sqsize; // x starts at 0 or one square in
                    for (int i = 0; i < Cols / 2; i++)
                    {
                        // you will only be drawing half the squares per row
                        // draw square
                        g.fillRect(x, y, sqsize, sqsize);
                        // move two square sizes over
                        x += sqsize * 2;
                    }
                }
            }

            public void update(Graphics g) { paint(g); }

            public static void main(String[] args)
            {
                JFrame frame = new JFrame("Arena");
                frame.setSize(600, 400);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setContentPane(new Arena());
                frame.setVisible(true);
            }
        }
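    A minimal sketch of the random placement being asked about, building on the Arena class above; the entity counts and colours are arbitrary choices for illustration:

        import java.util.Random;

        // hypothetical additions to Arena: pick random cells once, draw them in paint()
        private final Random rng = new Random();
        private final int[][] termites = randomCells(5);   // 5 termites
        private final int[][] predators = randomCells(2);  // 2 predators

        private int[][] randomCells(int count) {
            int[][] cells = new int[count][2];
            for (int i = 0; i < count; i++) {
                cells[i][0] = rng.nextInt(Rows);   // random row
                cells[i][1] = rng.nextInt(Cols);   // random column
            }
            return cells;
        }

        // inside paint(), after the squares are drawn:
        //     g.setColor(Color.RED);
        //     for (int[] c : termites)
        //         g.fillOval(c[1] * sqsize, c[0] * sqsize, sqsize, sqsize);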

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson's wonderful SPServices project for. If you haven't already seen or used SPServices, please do. It's a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you're not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let's say my list is a list of customers and the related list is a list of orders for that customer. Your first thought might be: that's easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However, there are a few problems with this idea:

    1. You'll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field of the page being displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work.
    2. The DataView Web Part doesn't look *exactly* like what the out of the box display form page does. Not a huge problem, and it can be overcome with some CSS style overrides, but still, more work.
    3. A DVWP showing a single record doesn't have the same toolbar that you would get using DispForm.aspx. Not a show-stopper, and you can rebuild the toolbar, but it's going to potentially require code, and then there's the security trimming, etc. that you have to get right.
    4. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work.

    For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a query string parameter called "ID", and displays whatever item has that ID number in the list. Trouble is, when you're coming in from an external app via a link, you don't know what that internal ID is (and frankly shouldn't). I don't like exposing internal SharePoint IDs to the outside world for the same reason I don't do it with database IDs. They're internal, and while it's fine to use them on the site itself, you don't want external links using them. They're volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That's great if you have that ability, but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it's Java, but really I'm a lazy geek). However, if you're doing this and have access to call a web service, that would be an option.
    Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That's where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery, b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knives of Web Parts, onto the DispForm.aspx page and added these lines:

        <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>

    Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that's just a PITA), it provides me with version control of sorts, and it's quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one).

        $(document).ready(function(){

            var queryStringValues = $().SPServices.SPGetQueryString();
            var id = queryStringValues["ID"];

            if(id == "0") {

                var customer = queryStringValues["CustomerNumber"];
                var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
                var url = window.location;
                $().SPServices({
                    operation: "GetListItems",
                    listName: "Customers",
                    async: false,
                    CAMLQuery: query,
                    completefunc: function (xData, Status) {
                        $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                            id = $(this).attr("ows_ID");
                            url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                            window.location = url;
                        });
                    }
                });
            }
        });

    What's happening here?

    Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library, as I had 15 lines of code to do this which is now gone).
    Line 4: Extract the ID value from the query string.
    Line 6: If we pass in "0" it means we're looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn't work unless you pass in something, and "0" is a *magic* number that will invoke the page but not look up a value in the database.
    Line 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices.
    Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item.
    Line 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page.

    As you can see, it dynamically creates a CAML query for the call to the web service using the passed-in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want.
    For example, you could bring up the correct item on the DispForm.aspx page based on customer name with something like this:

        http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony

    Use your imagination; a sketch of that generic variant follows below. Some people would opt for building a custom page with a DVWP, but if you want to leverage all the functionality of DispForm.aspx, this might come in handy if you don't want to rely on internal SharePoint IDs.
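    A sketch of what that generic variant might look like, building on the script above; the FilterId/FilterValue names follow the example URL, and this assumes the field being filtered is a text column:

        $(document).ready(function(){
            var qs = $().SPServices.SPGetQueryString();
            var id = qs["ID"];
            if(id == "0") {
                // field name and value both come from the query string now
                var field = qs["FilterId"];
                var value = qs["FilterValue"];
                var query = "<Query><Where><Eq><FieldRef Name='" + field + "'/>" +
                            "<Value Type='Text'>" + value + "</Value></Eq></Where></Query>";
                $().SPServices({
                    operation: "GetListItems",
                    listName: "Customers",
                    async: false,
                    CAMLQuery: query,
                    completefunc: function (xData, Status) {
                        $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                            id = $(this).attr("ows_ID");
                            window.location = $().SPServices.SPGetCurrentSite() +
                                "/Lists/Customers/DispForm.aspx?ID=" + id;
                        });
                    }
                });
            }
        });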

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate’s newest version of SQL Source Control and why I consider it a viable method to source control your database development.  It looks like this is going to turn into a little series in which I will explain how we have done things in the past, how life is different with SQL Source Control, and some of my philosophy and methodology around deployment with these tools.  But for now, let’s talk about some of the good and the bad of the tool itself.

    More Kudos and Features

    I mentioned previously how impressed I was with the responsiveness of Red-Gate’s team.  I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested that things behave a little differently, it has not been more than a day or two before a new build was ready for me to download and test.  Quite impressive!

    I’m sure many of the requests I put in were already in the plans, so I can’t really take credit for them, but over the course of this conversation Red-Gate has implemented several features that were not in the first Early Access version.  Those include:

    - Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins.
    - Adding the check-in comment text as a comment to the Work Item.
    - Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF views.
    - Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case, “Dev-Complete”).
    - Support for the Fortress 2.0 API, not just the Vault Pro 5.1 API (see the notes below regarding Fortress 2.0).

    These were all features I felt we really needed in place before I could honestly consider converting my team to SQL Source Control on a regular basis.  Now that I have them, my only excuse is not wanting to switch boats on the team mid-stream.  So when we wrap up our current release in a few weeks, we will make the jump.  In the meantime, I will continue to bang on the tool to make sure it is stable.  It passed one stability test when I loaded one of our larger database schemas into Fortress with SQL Source Control: that database has about 150 tables, 200 user-defined functions, and nearly 900 stored procedures, and the initial load to source control went smoothly and took only a brief amount of time.

    Warnings

    Remember that this IS still pre-release software, and while I have not had any problems since that first hiccup I wrote about last time, you still need to treat it with healthy respect.  As I understand it, the RTM is targeted for February.  There are a couple more features that I hope make it into the final release version; if they don’t, they will probably come soon thereafter.  Those are:

    - A Browse feature to let me look up a Work Item ID instead of having to remember it or dig back through my item details.  This is purely a convenience: I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier?
    - A multi-line comment area.  The current space for writing check-in comments is a single-line text box.  I would like a multi-line space, as I sometimes write lengthy commentary.  But I recognize that it is a struggle to get most developers to put in more than the word “fixed” as their comment, so the text box meets the needs of the majority as-is, and it is not a show-stopper for us.
    - Merge.  SQL Source Control currently has no Merge feature.  If two or more people change the same database object, you get a warning of the conflict and have to choose which version wins (and then manually edit it to include the others’ changes).  I think it unlikely you will run into actual conflicts in stored procedures and functions, but you might with views or tables.  Merge will be nice to have, but I’m not losing any sleep over it, and I have multiple tools at my disposal for manual merges, so again it is not a show-stopper for us.

    Automation also has its limits, and there are some changes you are better off scripting yourself.  For example, if you are refactoring a table and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column (see the sketch at the end of this post).  But because the tool only compares a before picture with an after picture, it cannot tell that you renamed a column; to the tool, it looks like you dropped one column and added another.  This is not a knock against Red-Gate: all automated scripting tools have this issue unless they actively monitor your every step.  It means that when you deploy your changes, SQL Compare will script the change as a column drop-and-add, or will attempt to rebuild the entire table, and neither approach preserves the existing data in that column the way sp_rename does.  Thankfully, SQL Compare warns about the potential loss of data before it performs the synchronization, giving you the chance to intercept the script and make the change yourself.

    Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later.  Vault Professional is the new name for what was previously known as Fortress.  (You can read about the name change on SourceGear’s site.)  The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro.  At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year.  Gyorgy came up with a work-around that lets me use SQL Source Control with Fortress 2.0, even though that combination is not officially supported.  It is working for us, and if you are in the same situation you can probably get the work-around instructions from Red-Gate if you’re really, really nice to them.

    Upcoming Topics

    Some of the other topics I will likely cover in this series over the next few weeks are:

    - How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users
    - What happens when you restore a database that is linked to source control
    - Handling multiple development branches of source code
    - Concurrent development practices and handling conflicts
    - Deployment tips and best practices
    - A recap after using the tool for a while
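    Returning to the sp_rename point above, here is a minimal sketch of the difference between a hand-written rename and what a before-and-after comparison generates.  The table and column names are hypothetical, purely for illustration.

```sql
-- Hand-written refactoring: sp_rename keeps the column's data intact.
-- (Hypothetical table/column names, not from any real schema.)
EXEC sp_rename 'dbo.Customer.PhoneNum', 'PhoneNumber', 'COLUMN';

-- What a diff-based tool sees instead: one column gone, another added.
-- Scripted this way, the data in PhoneNum is lost.
ALTER TABLE dbo.Customer DROP COLUMN PhoneNum;
ALTER TABLE dbo.Customer ADD PhoneNumber varchar(20) NULL;
```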

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting.

    I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana.

    I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles.

    The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module.

    If you have a stable, defined application-interface with the database (yes, usually one for each application), you need to keep the external definitions of the components of this interface in version control, linked with the application source, and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.
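    To ground the idea, here is a minimal sketch of what such an application-interface can look like in SQL Server.  All schema, object, and role names are hypothetical, invented for illustration; the principle is simply that the application’s role sees only views and procedures, never base tables.

```sql
-- The interface lives in its own schema, so the application never
-- touches base tables directly.
CREATE SCHEMA AppInterface;
GO

-- A view exposing only what the application needs.
CREATE VIEW AppInterface.OpenOrders
AS
SELECT o.OrderID, o.CustomerID, o.OrderDate, o.TotalDue
FROM dbo.Orders AS o
WHERE o.Status = 'Open';
GO

-- A procedure encapsulating the one write path the application uses.
CREATE PROCEDURE AppInterface.SubmitOrder
    @CustomerID int,
    @TotalDue money
AS
BEGIN
    INSERT dbo.Orders (CustomerID, OrderDate, TotalDue, Status)
    VALUES (@CustomerID, SYSDATETIME(), @TotalDue, 'Open');
END;
GO

-- The application's role is granted the interface schema only; the base
-- tables stay out of reach, so the database can be refactored freely as
-- long as the interface keeps its contract.
CREATE ROLE OrderAppRole;
GRANT SELECT, EXECUTE ON SCHEMA::AppInterface TO OrderAppRole;
DENY SELECT ON SCHEMA::dbo TO OrderAppRole;
GO
```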
    If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application.

    Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface and which, hopefully, don’t often need to be altered.  If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the new interface rather than refactor the underlying database.  The test team will also have to redo the functional and integration tests, which are of course written to the definition.  The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface.

    If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, taking care only that the application interface is never broken. Testing is easy, since the automated scripts that test the interface do not need to change.

    The database developers will, of course, maintain their own source control for the database, and will likely keep versions for all major releases. However, this does not need to be shared with the applications that the database serves. The definition of the application interfaces, on the other hand, should live within the application source, and changes to it must be subject to change-control procedures, as they will require a chain of tests.

    Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems.  Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database.  An RDBMS is inherently multi-user and self-integrating: if the developers work together on the database, then a subsequent integration of the database on a staging server rarely brings nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers normal database-development techniques.  That process is like demanding an official walking with a red flag in front of a motor car.  In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
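    As a brief aside on the automated interface tests mentioned above: below is a minimal sketch using the open-source tSQLt unit-testing framework — my choice for illustration, not something the article prescribes — reusing the hypothetical AppInterface objects from the earlier sketch.  A daily run of such tests confirms the interface contract without caring about the database internals.

```sql
-- A daily-run test that checks the interface contract, not the internals.
-- Hypothetical names throughout; assumes tSQLt is installed in the database.
EXEC tSQLt.NewTestClass 'InterfaceTests';
GO

CREATE PROCEDURE InterfaceTests.[test OpenOrders returns only open orders]
AS
BEGIN
    -- Isolate the test from real data and constraints.
    EXEC tSQLt.FakeTable 'dbo.Orders';

    INSERT dbo.Orders (OrderID, CustomerID, Status)
    VALUES (1, 10, 'Open'), (2, 11, 'Closed');

    SELECT OrderID INTO #expected FROM (VALUES (1)) AS v(OrderID);
    SELECT OrderID INTO #actual FROM AppInterface.OpenOrders;

    EXEC tSQLt.AssertEqualsTable '#expected', '#actual';
END;
GO

EXEC tSQLt.Run 'InterfaceTests';
```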
    There is no natural process to ‘version’ databases; each development project has to define its own system for maintaining the version level (one simple convention is sketched at the end of this piece).

    A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good: work means progress.  Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.
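    For what it’s worth, one common hand-rolled versioning convention — an assumption on my part, not something the article mandates — is to stamp the database itself with a version that deployment scripts can check:

```sql
-- Record the schema version as a database-level extended property.
-- The property name and version scheme are arbitrary conventions.
EXEC sys.sp_addextendedproperty
     @name = N'SchemaVersion',
     @value = N'5.2.0';

-- A deployment or application start-up script can then verify compatibility:
SELECT CAST(value AS nvarchar(32)) AS SchemaVersion
FROM sys.extended_properties
WHERE class = 0 AND name = N'SchemaVersion';
```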

    Read the article

< Previous Page | 308 309 310 311 312 313 314 315 316 317 318 319  | Next Page >