Search Results

Search found 421 results on 17 pages for 'artifacts'.

Page 12/17

  • Imagick: gifs, and background color

    - by TheButch3r
    Hi, I have two questions about using the PHP Imagick class that I can't get working properly:
    1.) I tried resizing/cropping a GIF, treating it as a normal image. It seemed to work, except that a few frames in the middle of the animation contained a small patch of artifacts. Is there a setting that should be used when working with multi-frame GIFs?
    2.) Is there a simple command to enlarge an image that is too small by just adding a background? For instance, if an image is 10x10, could I add a white background and end up with a 100x100px image with the original in the upper-left corner? Thanks for any help!
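
    A minimal sketch of both operations with Imagick, assuming the standard coalesce/deconstruct handling for animated GIFs; file names and sizes are hypothetical:

        <?php
        // 1.) Resize an animated GIF frame by frame. coalesceImages() first
        //     expands every frame to the full canvas, which avoids the patchy
        //     artifacts caused by per-frame optimization offsets.
        $gif = new Imagick('in.gif');
        $gif = $gif->coalesceImages();
        foreach ($gif as $frame) {
            $frame->resizeImage(64, 64, Imagick::FILTER_LANCZOS, 1);
        }
        $gif = $gif->deconstructImages();    // re-optimize the frames
        $gif->writeImages('out.gif', true);  // true = write all frames

        // 2.) Pad a small image onto a white 100x100 canvas; (0, 0) offsets
        //     keep the original at the top-left.
        $img = new Imagick('small.png');
        $img->setImageBackgroundColor('white');
        $img->extentImage(100, 100, 0, 0);
        $img->writeImage('padded.png');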


  • What would you use for deployment scripts in Java?

    - by Nadav
    Hi, I'm working on a Java web project that uses Maven to build its artifacts. At the end of the Maven build we have a few jar and war files that we need to deploy onto our development/testing environment. Right now we're using a pretty hefty Ant script that performs several tasks (on both Windows and Linux machines): it starts/stops services, copies/deletes files, builds some things and then executes them, etc. Ant does the job well - but the script is quickly getting very large and, to be honest, it feels inadequate for the task at hand. Are there other alternatives? I've heard of Gant, but I'm not sure that's the right way to go. Thanks for helping!
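
    Since Gant came up: a hedged sketch of what the same deployment steps might look like in Gradle, Gant's Groovy-based cousin; the service name and paths are hypothetical, not a drop-in script:

        // build.gradle - deployment helper tasks (sketch)
        task stopService(type: Exec) {
            commandLine 'sc', 'stop', 'MyAppService'  // Windows; swap for a shell command on Linux
        }
        task copyArtifacts(type: Copy, dependsOn: stopService) {
            from('target') {
                include '*.war', '*.jar'
            }
            into '/opt/myapp/deploy'
        }
        task startService(type: Exec, dependsOn: copyArtifacts) {
            commandLine 'sc', 'start', 'MyAppService'
        }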


  • deriving activity diagram-based GUIs and CRUD them with a DB?

    - by Xin Tanaka
    I received a big book full of processes. I was thinking about the end users (they will be lawyers) and decided the best GUI would be one that shows activity diagrams or business processes. It reminded me of QuickBooks and how non-accountants can successfully use it and understand accounting processes. I began doing research before sending my project to a bunch of programmers: is there some open source solution? Can I use MS Visio libraries? Which UML tool is programmable? What about Eclipse and its modeling tools? Etc. The key points here are: relationships between events, artifacts, actors, etc. should be stored in a database, and processes or steps in a process should be easily modified by updating the database. Does this sound too crazy? (Should I explain a bit more why it must be programmed this way?)
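
    A rough sketch of the kind of schema those key points imply - table and column names are hypothetical. The diagram becomes data, so changing a process step is an UPDATE rather than a code change:

        -- Each process is a graph of steps connected by transitions.
        CREATE TABLE process    (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE step       (id INTEGER PRIMARY KEY,
                                 process_id INTEGER NOT NULL REFERENCES process(id),
                                 label TEXT NOT NULL);
        CREATE TABLE transition (from_step INTEGER NOT NULL REFERENCES step(id),
                                 to_step   INTEGER NOT NULL REFERENCES step(id),
                                 guard TEXT);  -- condition shown on the diagram edge
        -- Actors (the lawyers' roles) attach to the steps they perform.
        CREATE TABLE actor      (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE step_actor (step_id  INTEGER NOT NULL REFERENCES step(id),
                                 actor_id INTEGER NOT NULL REFERENCES actor(id));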


  • Maven downloads have .lastUpdated as extension

    - by user303158
    I am a beginner with Eclipse and Maven. I have an Eclipse setup with m2eclipse and Subversive, and I have imported a Maven 2 project from SVN. But I get the error message that a whole bunch of artifacts are missing (for instance: Missing artifact org.springframework:spring-test:jar:3.0.1.RELEASE:test). If I look in my repository I see the jar files there, but they have an extra extension: .lastUpdated. Why is Maven appending .lastUpdated to the jars? And more importantly: how can I fix this? By the way, there is no mention of the type lastUpdated in my POMs.
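
    The .lastUpdated files are markers Maven leaves behind when a download attempt fails (the jar itself was never fetched), and Maven may not retry while they are present. A common fix, assuming the default ~/.m2 repository location, is to delete the markers and force re-resolution:

        # remove the failed-download markers
        find ~/.m2/repository -name "*.lastUpdated" -delete
        # -U forces Maven to re-check remote repositories and re-download
        mvn -U clean install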


  • Manage groups of build configurations in Hudson

    - by Lóránt Pintér
    I'm using Hudson to build my application. I have several branches that come and go. Whenever there's a new branch, I have to set up the following builds for it: a continuous build that runs after every change in SVN; a nightly build; a nightly site generation (I'm using Maven under the hood); and a weekly integration build for some branches. Currently this means I need to copy four template configurations and set them up with the branch URL. I don't like this for two reasons: It's redundant, so modifying something is error-prone and takes a lot of time. And I need four full checkouts of the product per branch on every build slave, plus four separate private Maven repositories, not to mention the built artifacts. This is a lot of wasted space. What I'd like instead is to have one workspace and one configuration for all these builds. Is this possible with Hudson?


  • What do gurus say about Requirements Traceability Matrix?

    - by Jaywalker
    Our organization is at CMMI Level 2, and as part of the requirements of the level we have to maintain an RTM which more or less contains the following entries for each requirement:
    - Requirement Description
    - Reference Section, Functional Specification Document
    - Reference Section, Design Document
    - Reference Section, Test Cases Document
    Now, this might be overkill for a small project. But more importantly, this could be a nightmare to maintain when the requirements/features keep changing and documents have to be constantly updated. What do the gurus say about this? Should one avoid such a level of documentation, or are there tools to manage the situation when a "change" outdates so many artifacts? And by using the term 'gurus', I am not talking about coding champs; rather people like Steve McConnell or others who have worked on commercial projects of medium to large scale. Quotes/book references/articles will suit me. EDIT: It's not just requirements that change. The Design Document can change; well, even test cases may change.


  • Bandpass filter of FFT applied image. (Like ImageJ bandpass filter algorithm)

    - by maximus
    There is a good function that I need, which is implemented in a Java program, ImageJ, and I need to understand the algorithm used there. The function has several parameters. Before applying the FFT, it converts the image to a special one: "The Bandpass Filter uses a special algorithm to reduce edge artifacts (before the Fourier transform, the image is extended in size by attaching mirrored copies of image parts outside the original image, thus no jumps occur at the edges)." Can you tell me more about this special transform - effectively tiling the canvas with mirrored copies of the image? I am writing in C++ and wish to rewrite that part of the program in C++.
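
    A minimal sketch of that mirror-extension step, assuming a grayscale image stored row-major in a std::vector. ImageJ centers the original in the enlarged (typically power-of-two) canvas; this variant keeps it at the top-left, which is the same idea minus an offset:

        #include <cstddef>
        #include <vector>

        // Reflect index i into [0, n) without repeating the edge pixel
        // (mirror tiling: ... 2 1 | 0 1 2 ... n-1 | n-2 n-3 ...).
        static std::size_t mirror(long i, long n) {
            if (n <= 1) return 0;
            const long period = 2 * (n - 1);
            long m = i % period;
            if (m < 0) m += period;
            return static_cast<std::size_t>(m < n ? m : period - m);
        }

        // Extend src (w x h) onto a dstW x dstH canvas by tiling mirrored
        // copies, so pixel values are continuous across the original image
        // borders and the FFT sees no jumps at the edges.
        std::vector<float> mirrorExtend(const std::vector<float>& src,
                                        long w, long h, long dstW, long dstH) {
            std::vector<float> dst(static_cast<std::size_t>(dstW) * dstH);
            for (long y = 0; y < dstH; ++y)
                for (long x = 0; x < dstW; ++x)
                    dst[static_cast<std::size_t>(y) * dstW + x] =
                        src[mirror(y, h) * static_cast<std::size_t>(w) + mirror(x, w)];
            return dst;
        }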


  • Building NDK app with Android ADT on Windows

    - by Michael Sh
    While there is a ton of information on the topic, there is no clear guide on how to compile C++ code in ADT. Is Cygwin required? Where do the build artifacts go? How do I configure the destination folder for the build package? Are there debug and release versions? Is it possible to debug and step through the C++ code in ADT? Maybe it is all described in a single resource, in which case a link would be welcome!
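
    For reference, a minimal sketch of the usual ndk-build setup; the module and source file names are hypothetical, and recent NDK releases ship their own Windows toolchain, so Cygwin is no longer strictly required:

        # jni/Android.mk - build one shared library from C++ sources
        LOCAL_PATH := $(call my-dir)
        include $(CLEAR_VARS)
        LOCAL_MODULE    := native-lib
        LOCAL_SRC_FILES := native-lib.cpp
        include $(BUILD_SHARED_LIBRARY)

    Running ndk-build in the project root drops the .so files under libs/<abi>/ (intermediates go under obj/), from where the normal ADT packaging step picks them up into the APK; ndk-build NDK_DEBUG=1 produces debuggable binaries.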


  • Disabling security warning caused by BaseIntermediateOutputPath?

    - by Chris R. Donnelly
    Hi all, our team overrides BaseIntermediateOutputPath (and other related properties) in our Visual Studio projects in order to have build artifacts go outside the main tree. However, this causes an annoying warning dialog to appear when you open a project for the first time in a new location (which happens on new machines, when you check out a branch to a new location, when you have to delete corrupted .suo/.user files, etc.). Is there any way to disable the warning? FYI, we are using Visual Studio 2008, and we have encountered this warning on Windows XP as well as Windows 7, so it is not UAC-related.


  • How to use JAXB APIs to generate classes from xsd?

    - by Simran
    I need to generate bean classes from an .xsd without using the xjc command or Ant. I found an implementation in Apache Axis2, but I am unable to generate the artifacts. I have written the following code, but I get a NullPointerException:

        SchemaCompiler sc = XJC.createSchemaCompiler();
        URL url = new URL("file://E:\\JAXB\\books.xsd");
        sc.parseSchema(new InputSource(url.toExternalForm()));
        S2JJAXBModel model = sc.bind();
        JCodeModel cm = model.generateCode(null, null);
        cm.build(new FileCodeWriter(new File("E:\\JAXBTest")));

    Can anyone help me or provide some links?
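
    Two things in that snippet commonly produce the NullPointerException: bind() returns null when the schema fails to parse (and without an ErrorListener the reason is swallowed silently), and "file://E:\\JAXB\\books.xsd" is not a well-formed file URL. A sketch of the same sequence with both addressed, using the com.sun.tools.xjc.api classes the snippet already relies on:

        import java.io.File;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXParseException;
        import com.sun.codemodel.JCodeModel;
        import com.sun.codemodel.writer.FileCodeWriter;
        import com.sun.tools.xjc.api.*;

        public class XsdToBeans {
            public static void main(String[] args) throws Exception {
                SchemaCompiler sc = XJC.createSchemaCompiler();
                // Report schema problems instead of failing silently.
                sc.setErrorListener(new ErrorListener() {
                    public void error(SAXParseException e)      { e.printStackTrace(); }
                    public void fatalError(SAXParseException e) { e.printStackTrace(); }
                    public void warning(SAXParseException e)    { e.printStackTrace(); }
                    public void info(SAXParseException e)       { e.printStackTrace(); }
                });
                // Let File produce a well-formed URL (file:/E:/JAXB/books.xsd).
                sc.parseSchema(new InputSource(
                        new File("E:/JAXB/books.xsd").toURI().toString()));
                S2JJAXBModel model = sc.bind();
                if (model == null) {  // bind() returns null if parsing failed
                    System.err.println("Schema compilation failed; see errors above.");
                    return;
                }
                JCodeModel cm = model.generateCode(null, null);
                cm.build(new FileCodeWriter(new File("E:/JAXBTest")));
            }
        }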


  • How can I view javadocs that have been added to a maven repository?

    - by Coderer
    I know I can use Maven to pull the javadocs for an artifact (if they've been added), which should come through as a JAR (right?), which I can then unpack and browse. My problem is that I'm trying to figure out which of a series of artifacts has the particular package I'm hunting for. I could download a half-dozen javadoc packages, unpack all of them, and then open the index files one at a time, but that sounds pretty unpleasant. There must be a better way! I noticed that Nexus supports javadoc artifact browsing, but apparently only in the Pro version (which we do not have, and are unlikely to get any time soon). I'd prefer a GUI solution, though I'd settle for a command-line answer, provided it's something along the lines of showmethedocs groupId:artifactId [version].
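
    A rough command-line approximation, assuming the javadoc jars are published (the coordinates below are hypothetical): fetch each candidate's javadoc artifact with dependency:get, then grep the jar listing for the package path - a javadoc jar contains one HTML file per class under its package directory:

        # pull the javadoc jar into the local repository
        # (older dependency-plugin versions also require -DrepoUrl=<remote repo>)
        mvn dependency:get -Dartifact=org.example:some-lib:1.0:jar:javadoc
        # search the jar listing for the package you are hunting for
        jar tf ~/.m2/repository/org/example/some-lib/1.0/some-lib-1.0-javadoc.jar \
            | grep com/example/thepackage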


  • How to pass binaries built upstream to a remote downstream build slave

    - by sbi
    We're using Hudson on Windows to build a .NET solution and run the unit tests (NUnit). Hudson is thereby used to start batch files that do the actual work. I am now trying to set up a new test that is to run on a build slave and will run for a very long time. The test should use the binaries produced by the upstream build. I have searched the Hudson documentation, but I cannot find out how to pass upstream build artifacts to downstream slaves. How do I do this?
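
    One common approach, sketched here with hypothetical job and file names: archive the binaries in the upstream job ("Archive the artifacts" post-build action), then have the downstream batch file pull them over Hudson's stable permalink URLs. The Copy Artifact plugin offers the same thing without scripting, if installing wget on the slaves is unattractive:

        rem fetch the artifacts archived by the upstream build
        set UPSTREAM=http://hudson-server/job/MyApp-Build/lastSuccessfulBuild/artifact
        wget %UPSTREAM%/bin/Release/MyApp.dll
        wget %UPSTREAM%/bin/Release/MyApp.Tests.dll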


  • automatically downloading and installing an exe file

    - by dbphydb
    I need to automate the process of downloading and installing an exe file, abc.exe, from, say, 'http://10.34.45.21:8080/cruisecontrol/artifacts/xxx_trunk_nightly_build/xxx/test/'. Right now I have to manually go to 'http://10.34.45.21:8080/cruisecontrol', then click on each folder before I finally click on the abc.exe file. It then downloads the exe to my machine, and I have to double-click on the exe to install it. I want to automate this whole process, so that when I run the script it automatically downloads the exe file and installs it. Is it possible to do this using PHP? I am very much a beginner. Any help will be of use.
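
    A minimal PHP sketch of the idea, assuming the script runs on the target Windows machine, allow_url_fopen is enabled, and the installer supports a silent switch (the /S flag is hypothetical and installer-specific):

        <?php
        $url  = 'http://10.34.45.21:8080/cruisecontrol/artifacts/'
              . 'xxx_trunk_nightly_build/xxx/test/abc.exe';
        $dest = 'C:\\temp\\abc.exe';

        // copy() accepts HTTP URLs when allow_url_fopen is on
        if (!copy($url, $dest)) {
            die("Download failed\n");
        }
        // run the installer; without a silent switch the UI still pops up
        exec("\"$dest\" /S", $output, $exitCode);
        echo $exitCode === 0 ? "Installed\n" : "Installer returned $exitCode\n";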


  • How can I resolve naming conflict in given precompiled libraries?

    - by asm
    I'm linking two different libraries that have functions with exactly the same names (opengl32.lib and libgles_cm.lib - an OpenGL ES emulation for the Win32 platform), and I want to be able to specify which version I'm calling. I'm porting a game to OpenGL ES, and what I want to achieve is split-screen rendering, where the left side is the OpenGL version and the right side is the ES version. To produce the same result, they will receive slightly different calls, and I'll be able to visually compare them, effectively finding visual artifacts. It worked perfectly with OpenGL/DirectX in the same window, but now the problem is that both versions import functions with the same name, like glDrawArrays, and only one version is imported. Unfortunately, I don't have the sources of either library. Is there a way to... I don't know, wrap one library into an additional namespace before linking (with calls like ES::glDrawArrays), somehow rename some of the functions, or do anything else? I'm using the Microsoft compiler now, but if there is a solution with another one (GCC/ICC), I'll switch to it.
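
    One workaround to sketch: link only opengl32.lib statically and pull the ES entry points out of the emulation DLL at runtime with LoadLibrary/GetProcAddress, so the two sets of symbols never meet at link time. The DLL name is hypothetical and depends on the emulator:

        #include <windows.h>

        // Minimal GL typedefs so the sketch stands alone.
        typedef unsigned int GLenum;
        typedef int          GLint;
        typedef int          GLsizei;

        namespace ES {
            typedef void (__stdcall *PFNGLDRAWARRAYS)(GLenum mode, GLint first, GLsizei count);
            PFNGLDRAWARRAYS glDrawArrays = 0;

            // Load the emulator's DLL at runtime instead of linking libgles_cm.lib,
            // so its exports never collide with opengl32.lib's at link time.
            bool init() {
                HMODULE dll = LoadLibraryA("libGLES_CM.dll");  // hypothetical DLL name
                if (!dll) return false;
                glDrawArrays = reinterpret_cast<PFNGLDRAWARRAYS>(
                    GetProcAddress(dll, "glDrawArrays"));
                return glDrawArrays != 0;
            }
        }
        // Desktop GL keeps its normal linkage: ::glDrawArrays(...)
        // The ES copy is called as:           ES::glDrawArrays(...)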


  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support? Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. In monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that files in this path stopped changing. Some key configuration points of the job:
    - It is tied to a specific slave node
    - It builds in a 'custom workspace'
    - It triggers a downstream job that builds in the same custom workspace on the same slave node


  • jQuery delay doesn't work as expected

    - by Chris
    I have the following jQuery code:

        $("#dropdown").hover(function() {
            $(this).stop(true,true).fadeTo('fast',1);
            $("#options").stop(true,true).slideDown();
        }, function() {
            $(this).delay(1000).stop(true,true).fadeTo('fast',0.1);
            $("#options").delay(1000).stop(true,true).slideUp();
        });

    What I expect to happen is that when the mouse leaves #dropdown, it will wait 1 second before continuing. This is not happening. What I am trying to achieve, in case there is a better way, is to leave the drop-down menu visible for a second or two after you move your mouse away, and also to prevent the events from firing again, to avoid artifacts and "funnies" if you were to move the mouse over and out of the div very quickly.
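
    The likely culprit: .delay() only inserts a pause into the element's animation queue, and the .stop(true, true) chained right after it clears that same queue, so the pause never runs. A plain timer that the mouseenter handler cancels is the usual fix; a sketch:

        var hideTimer;
        $("#dropdown").hover(function () {
            clearTimeout(hideTimer);  // re-entering cancels a pending hide
            $(this).stop(true, true).fadeTo('fast', 1);
            $("#options").stop(true, true).slideDown();
        }, function () {
            var dropdown = this;
            hideTimer = setTimeout(function () {  // hide only if the mouse stays out
                $(dropdown).stop(true, true).fadeTo('fast', 0.1);
                $("#options").stop(true, true).slideUp();
            }, 1000);
        });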


  • How do you use build labels in publishers in cruisecontrol?

    - by Omnifarious
    I have this section in my CruiseControl config.xml file:

        <publishers>
          <onsuccess>
            <artifactspublisher dest="artifacts/${project.name}" file="projects/${project.name}/fred"/>
            <execute command="hg -R hg-succeeded/${project.name} pull"/>
            <execute command="hg -R hg-succeeded/${project.name} tag -l build-${label} -r tip"/>
          </onsuccess>
        </publishers>

    I'm getting tags that look like build-${label}. The ${label} part isn't being replaced by the build label as I expect; I'm expecting something like build.1 to show up in place of ${label}. How do I make this happen? I do have the default labelincrementer configured with a <labelincrementer /> tag in my project. Also, the CruiseControl documentation is absolutely awful. Is there better documentation anywhere?


  • Should we use Nexus or Artifactory for a Maven Repo?

    - by John Stauffer
    We are using Maven for a large build process (100+ modules). We have been storing our external dependencies in source control and using that to update a local repo. However, we are ready to graduate to a local repo that can cache central, so that we don't have to proactively download all third parties (but we can still have a local repo to pull from). In addition, we want to publish our internal build artifacts from a nightly build so that developers don't have to build the world. We are considering Nexus and Artifactory. What are the reasons for preferring one over the other? Are there others we should be considering?
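
    Whichever manager wins, the build-side wiring is the same: point Maven at the manager's caching group in settings.xml. A sketch with a hypothetical host - the path shown is Nexus's default public group; Artifactory's URL layout differs:

        <settings>
          <mirrors>
            <mirror>
              <id>internal-repo</id>
              <mirrorOf>*</mirrorOf>
              <url>http://repo.example.com:8081/nexus/content/groups/public</url>
            </mirror>
          </mirrors>
        </settings>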


  • debate: Is adding third party libraries to a war a good idea?

    - by Master Chief
    We have a debate going on. (a) The "standard" way of assembling a web app: create a WAR with all our app artifacts, while all other components like Hibernate, memcached, etc. are deployed in the tomcat/shared/lib area. (b) Create a humongous WAR with everything included and nothing in tomcat/shared/lib. Pros for (a): it keeps things modular and the WAR is small. Cons for (a): the dependency on shared/lib has to be managed, especially by the deployment process. Pros for (b): all dependencies are controlled by the build process, removing any room for error. Cons for (b): the WAR is really, really big, and if you are deploying over a network to a huge farm, that might have an impact. I want to see what thoughts others might have about this.
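
    For what it's worth, option (a) is typically expressed in a Maven build by marking the shared jars as provided, so the code compiles against them but they stay out of the WAR (coordinates hypothetical):

        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-core</artifactId>
          <version>3.5.0-Final</version>
          <!-- provided = available at runtime from tomcat/shared/lib,
               so the war plugin leaves it out of WEB-INF/lib -->
          <scope>provided</scope>
        </dependency>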


  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements: Bill Heys - Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010. Willy-Peter Schaub - Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010. Chris Birmele - Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect - Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to recover costs; the work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: "You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch)." - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity.

    "Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    "A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

    "We tried the 'add environment' back in South-Africa and while it was 'phenomenal', especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    "I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    You can read SSW's Rules to Better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:
    - Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences.
    - Merge Mania—spending too much time merging software assets instead of developing them.
    - Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
    - Never-Ending Merge—continuous merging activity because there is always more to merge.
    - Wrong-Way Merge—merging a software asset version with an earlier version.
    - Branch Mania—creating many branches for no apparent reason.
    - Cascading Branches—branching but never merging back to the main line.
    - Mysterious Branches—branching for no apparent reason.
    - Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace.
    - Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. (Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.)
    - Development Freeze—stopping all development activities while branching, merging, and building new base lines.
    - Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing.
    (From Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia.)

    "In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium." - Willy-Peter Schaub on Merge Paranoia

    "Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Main line - this is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well.

    Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:
    - you and your team are working on a new set of features and the customer wants a change to his current version?
    - you are working on two features and the customer decides to abandon one of them?
    - you have two teams working on different feature sets and their changes start interfering with each other?
    - I just use labels instead of branches?
    That's a lot of "what ifs", but there is a simple way of preventing this: Branching…

    "In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Creating a single feature branch means you can isolate the development work on that branch.

    It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions changed so contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or check-outs of the "Main" line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

    Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

    There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:
    1. The Team completes their work according to their definition of done.
    2. Merge from "Main" into "Sprint1" (Forward Integration).
    3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
    4. Merge from "Sprint1" into "Main" to commit your changes (Reverse Integration).
    5. Check in.
    6. Delete the Sprint1 branch. (Note: The Sprint 1 branch is no longer required, as its useful life has been concluded.)
    7. Check in.
    8. Done.
    But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.

    Figure: With Sprint 1 merged, you can create a Release branch and run your final packaging and testing.

    "In 99% of all projects I have been involved in or watched, a 'shippable product' only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brainwashing that we only ship when we reach the agreed quality and business feature bar." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what we at SSW call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: "Do you conduct an internal 'test please' prior to releasing a version to a client?"

    Figure: If you find a deviation from the expected result, you fix it on the Release branch.

    If during your final testing or your "Test Please" you find there are issues or bugs, then you should fix them on the Release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the Release branch as you would the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features to a Release branch; that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.

    Figure: When the Team decides it is happy with the product, you can create an RTM branch.

    Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an audit-trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable, and if a dispute is raised by the customer you can produce a verifiable version of the source code for an independent party to check. Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

    Figure: If you have made any changes in the Release, you will need to merge back up to Main in order to finalise the changes.

    Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment. Note: In order to really achieve protection for both you and your client, you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

    Figure: After the Sprint Planning meeting, the process begins again.

    Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

    How do we handle bugs in production that can't wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: "In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI's from main to sprint 2, I guess. Your 'Figure: Merging Sprint 2 back into Main.' covers what I tend to believe to be reality in most cases." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resources need to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):
    1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
    2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
    3. Make Time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. (Note: The Team must agree that it can still meet the Sprint Goal.)
    4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). (Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.)
    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as they are uncomplicated but severe bugs.

    Figure: Real-world issue where a bug needs to be fixed in the current release.

    If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: "Do you conduct an internal 'test please' prior to releasing a version to a client?"

    Figure: After you have fixed the bug, you need to ship again.

    You then need to again create an RTM branch to hold the version of the code you released in escrow.

    Figure: Main is now out of sync with your Release.

    We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch - are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

    Figure: Getting the changes in Main into Sprint 2 is very important.

    The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they get further out of sync with Main. Note: Avoid the "Big Bang Merge" at all costs.

    Figure: Merging Sprint 2 back into Main (the Reverse Integration), and R0 terminates.

    Sprint 2 now merges (Reverse Integration) back into Main, following the procedures we have already established.

    Figure: The logical conclusion - this then allows the creation of the next release.

    By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real-world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

    Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010
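
    The forward-then-reverse ritual described above maps directly onto a pair of tf.exe merges; a sketch with hypothetical server paths:

        rem create the sprint branch off Main at the start of the sprint
        tf branch $/Product/Main $/Product/Sprint1

        rem Forward Integration: pull the latest Main changes into the sprint branch
        tf merge /recursive $/Product/Main $/Product/Sprint1
        rem ... resolve conflicts, build, test, check in ...

        rem Reverse Integration: push the stabilized sprint work back to Main
        tf merge /recursive $/Product/Sprint1 $/Product/Main
        tf checkin /comment:"RI: Sprint1 -> Main"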


  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server. This has been a hot topic for our clients lately, so it really struck a chord. The speaker for the first session was from Boeing. It was nice to hear how a company mixes both agile and waterfall project management. The approaches that he presented were very pragmatic. For their needs, reporting is the crucial part of their decision to use TFS. This was interesting, since it is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience. As with the other discussion points, he took a very level-headed stance. The approach he was prescribing was to eat the elephant a bite at a time instead of all at once; if you try to convert your entire shop at once, the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS. If you don't implement a tool, and processes around it, that comply with the standards bodies that govern your business, you are in for a world of hurt. Ultimately, the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed, and it reduced licensing costs compared with all the different tools they would otherwise need to buy to complete the same tasks. They got to this point by doing an industry evaluation. Although TFS came out on top, he said that it still has a big gap in the Java area. Of course, in this market there are vendors helping to close that gap. The second session was on how continuous feedback in agile is a new focus in VS2012. The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun, so I will just say that I am looking forward to digging into the product after seeing this presentation. Beyond that, I will simply list some of the key features that caught my attention:
    - Ability to link documents into tasks as artifacts
    - Web access portal
    - PowerPoint storyboards
    - Exploratory testing
    - Request feedback (allows users to record notes, screen shots and video/audio)
    See you after the second half. del.icio.us Tags: TechEd,TechEd 2012,TFS,Team Foundation Server


  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding database projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Hopefully, VS2012 does that for you and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even if I had converted them in the previous session and saved the solution.

    Converting the project(s). When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: "Functional changes required. Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1." If you accept, your project is converted, and it should compile with no errors right away, except if you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file, which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion references to dbschema files could be added to VSDB projects.

    Cleaning up. You will notice that your project file is now a sqlproj file, but the old dbproj is still there. In fact, at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If, like me, you plan on using VS2012 exclusively, you can get rid of the following files, which are still on your disk and in your source control:
    - the dbproj and dbproj.vspscc files
    - Properties/Database.sqlcmdvars
    - Properties/Database.sqldeployment
    - Properties/Database.sqlpermissions
    - Properties/Database.sqlsettings
    You might wonder where the information which used to be in the Properties files is now stored. Permissions: a Permissions.sql was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes). SQLVars: they are defined inside the publish.xml files. Deployment: they are also in the publish.xml files. Settings: I was unable to find where those are now; I suppose they are not defined anymore.

    But Visual Studio still says my database projects should be converted! I had this error upon closing and then re-opening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files).

    More. For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx. Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx


  • ArchBeat Link-o-Rama for 2012-09-19

    - by Bob Rhubart
    BPM Process Accelerator Packs – Update | Pat Shepherd: Architect Pat Shepherd shares several resources relevant to the new Oracle Process Accelerators for Oracle Business Process Management.
    Oracle BI EE Management Pack Now Available for Oracle Enterprise Manager 12cR2 | Mark Rittman: A handy and informative overview from Oracle ACE Director Mark Rittman.
    WebSockets on WebLogic Server | Steve Button: "As there's no standard WebSocket Java API at this point, we've chosen to model the API on the Grizzly WebSocket API with some minor changes where necessary," says James "Buttso" Buttons. "Once the results of JSR-356 (Java API for WebSocket) becomes public, we'll look to implement and support that."
    Oracle Reference Architecture: Software Engineering: This document from the IT Strategies from Oracle library focuses on integrated asset management and the need for effective asset metadata management to ensure that assets are properly tracked and reused in a manner that provides a holistic functional view of the enterprise.
    The tipping point for cloud management is nigh | Cloud Computing - InfoWorld: "Businesses typically don't think too much about managing IT resources until they become too numerous and cumbersome to deal with on an ad hoc basis—a point many companies will soon hit in their adoption of cloud computing." — David Linthicum
    DevOps Basics: Track Down High CPU Thread with ps, top and the new JDK7 jcmd Tool | Frank Munz: "The approach is very generic and works for WebLogic, Glassfish or any other Java application," says Frank Munz. "UNIX commands in the example are run on CentOS, so they will work without changes for Oracle Enterprise Linux or RedHat. Creating the thread dump at the end of the video is done with the jcmd tool from JDK7." Frank has captured the process in the posted video.
    OIM 11g R2 UI customization | Daniel Gralewski: "OIM user interface customizations are easier now, and they survive patch applications--there is no need to reapply them after patching," says Fusion Middleware A-Team member Daniel Gralewski. "Adding new artifacts, new skins, and plugging code directly into the user interface components became an easier task." Daniel shows just how easy in this post.
    Thought for the Day: "I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated." — Poul Anderson (November 25, 1926 – July 31, 2001) Source: SoftwareQuotes.com


  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of the post is not about charitable giving, although that is also a great topic. It is about Application Lifecycle Management and the systems that support it, and 2011 will be a year where I expect many teams to invest heavily in this area. For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas, even for relatively small and medium sized projects. In recent years, many teams have adopted methodologies which address individual aspects of this, but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... until now! Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and it is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier. These possibilities (and practicalities) are nothing short of amazing. Architecture through Testing integration? YES. Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES. Identification of which tests will be potentially impacted by a given code change? YES. Resilient Automated Testing of User Interfaces? YES. Automatic Deployment Management? YES. Integration-Level testing as part of (designated) Builds? YES. I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain. Over the course of the year, I am hoping to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future. As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM? PS: For those interested in becoming an "alms giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/


  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background. This note is a supplement to the blog entry "SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs" by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single program ID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build - Design Time. The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1. We will first create a placeholder project in OSB with a proper directory structure, so that we can export the WSDL, XSD and JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool. Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA inbound event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that validation against the schema has been turned off. As a result, this will allow the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2. Create two simple XML-based business services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a proxy service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

        fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element. With the routing function in place, build the routing table to add two branches routing the IDOCs to the appropriate business service, which writes the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run Time. After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.
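
    As a worked example of that routing expression: for a translated DEBMAS06 payload, the first child of $body is the IDOC root element, so the expression reduces to the IDOC type name that the routing table branches on:

        (: payload inside $body after adapter translation :)
        <DEBMAS06>
          <IDOC> ... </IDOC>
        </DEBMAS06>

        (: fn:node-name($body/*[1])      => the QName DEBMAS06
           fn:local-name-from-QName(...) => the string "DEBMAS06",
           which the routing table matches against its branch values :)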

