Search Results

Search found 6882 results on 276 pages for 'git repository'.

Page 249/276

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences.

    The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize the end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. Outlining the resources needed - in both skill set and duration - forces the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to the relevant materials for their jobs.
Request Phase Summary With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • Gnome-shell fails to load on 12.10

    - by Githlar
    I'm usually the one answering questions, but on this one I'm thoroughly stumped!

    My Setup:
    - Ubuntu 12.10 (dist upgrade from 12.04)
    - ATI M96 [Mobility Radeon HD 4650]

    Upon the first installation of 12.10 I had all kinds of issues getting the legacy ATI drivers to install (I guess the source for the drivers isn't kosher with kernel 3.5). So, I added the repository ppa:makson96/fglrx, which has a version of the ATI source patched to work with kernel 3.5. After installation of fglrx-legacy from that PPA, gnome-shell and all my graphics worked fine... until today.

    The Problem
    I unsuspended my computer today and the screen was black (not off - the black from the gnome lock screen). I'd move my mouse/hit a key and the background would flash and then it'd go back to black. Restarted via VT1. Logged into a Gnome (gnome-shell) session, but no gnome-shell!

    Investigation:
    First, I went to VT1 and tried export DISPLAY=:0; gnome-shell --replace. It appeared to work fine; switched back to X and nothing. Went back to VT1 and saw this error message:

        JS ERROR: !!! Exception was: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO
        JS ERROR: !!! message = '"Object 0x7fc748129c30 is not a subclass of (null), it's a xO"'
        JS ERROR: !!! fileName = '"/usr/share/gnome-shell/js/ui/tweener.js"'
        JS ERROR: !!! lineNumber = '218'
        JS ERROR: !!! stack = '"()@/usr/share/gnome-shell/js/ui/tweener.js:218
        wrapper()@/usr/share/gjs-1.0/lang.js:204
        ()@/usr/share/gjs-1.0/lang.js:145
        ()@/usr/share/gjs-1.0/lang.js:239
        init()@/usr/share/gnome-shell/js/ui/tweener.js:49
        init()@/usr/share/gnome-shell/js/ui/environment.js:96
        @<main>:1 "'
        Window manager warning: Log level 32: Execution of main.js threw exception: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO

    Note: everywhere it says "it's a xO", xO is actually garbled and changes every time (I'm thinking memory corruption?). This error is thrown by line 96 of /usr/share/gnome-shell/js/ui/environment.js: tweener.Init()

    Did a purge of fglrx-legacy, reboot, reinstall fglrx-legacy, reboot... same thing. Did a ppa-purge of ppa:gnome3-team/gnome3, and reinstalled gnome-shell and ubuntu-desktop from the standard repositories... same thing. I'm really at a loss here. I love gnome-shell, and after using it for nearly a year gnome classic just seems so archaic.
    Additional Information
    Apt log from the day I first suspended my machine (these are upgrades from the gnome3-team/gnome3 and ubuntu-wine/ppa PPAs):

        Start-Date: 2012-11-24 17:30:28
        Commandline: aptdaemon role='role-commit-packages' sender=':1.618'
        Install: gkbd-capplet:amd64 (3.6.0-0ubuntu1), gnome-control-center-unity:amd64 (1.0-0ubuntu1~ubuntu12.10.1)
        Upgrade: nautilus:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), libgnome-control-center1:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), wine1.5-i386:i386 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), wine1.5:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), gnome-settings-daemon:amd64 (3.4.2-0ubuntu14, 3.6.3-0ubuntu1~ubuntu12.10.1), gnome-control-center-data:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), gnome-accessibility-themes:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), gnome-themes-standard:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), wine1.5-amd64:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), nautilus-data:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), gnome-control-center:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), libnautilus-extension1a:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1)
        End-Date: 2012-11-24 17:31:32

    fglrxinfo (driver appears to be working):

        display: :0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: ATI Mobility Radeon HD 4650
        OpenGL version string: 3.3.11653 Compatibility Profile Context

    Does anybody have any further ideas?

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning, so hadn't listed a return value. These had been generating warnings for years, which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out.

    Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations when gcc was being used for the amd64 kernel bringup, before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable-code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future.

    Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution.
    For instance:

        #include <stdlib.h>

        int callback(int status) {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.
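    Putting those two pieces together, a callback of this shape can stay warning-free on both older and newer releases; a minimal sketch (the function name is illustrative):

        #include <stdlib.h>

        /* Tell Studio cc and lint that exit() never returns, for releases
           whose headers do not already say so; gcc simply skips the
           unrecognized pragma. */
        #pragma does_not_return(exit)

        int
        callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);   /* no dummy return needed after this */
        }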

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (via DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as:

        [Test]
        public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
        {
            ...
        }

    ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like:

        act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

        context["The user is already logged in"] = () =>
        {
            before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

            context["The external login succeeds"] = () =>
            {
                before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                    .Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>()));

                context["External login already exists for current user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Should add 'login successful' alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("Login successful");
                        alerts[0].AlertType.should_be(AlertType.Success);
                    };

                    it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
                };

                context["External login already exists for another user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Adds an error alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                        alerts[0].AlertType.should_be(AlertType.Error);
                    };

                    it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
                };
            };
        };

    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:

        public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
        {
            throw new NotImplementedException();
        }

        public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

        public string GetLocalUserName(string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

    I have no idea what in the world to name the test class or the test methods, or even whether I should fold the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering.
    I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or a pointer in the right direction. I would even welcome "your question is invalid"-type answers as long as some explanation is provided.
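    For what it's worth, one possible shape for a service-layer spec, reusing NSpec's context/before/it vocabulary from the snippet above (the class, field and mock names here are hypothetical, not taken from any tutorial):

        // Sketch only: names are invented for illustration.
        class describe_ApplicationServices : nspec
        {
            ApplicationServices services;

            void when_associating_an_external_login_with_a_user()
            {
                before = () => services = new ApplicationServices(/* mocked repository */);

                it["stores the provider and provider user id against the user"] = () =>
                {
                    services.AssociateExternalLoginWithUser("bob", "provider", "provideruserid");
                    // assert against the mocked repository here
                };
            }
        }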

    Read the article

  • Changing the Operating System with only Ubuntu installed

    - by Games Brainiac
    I really wanted to dive into the world of open source operating systems, so I downloaded the latest version of Ubuntu (13.10) and installed it on a clean (no operating system installed, absolutely nothing) Lenovo ThinkPad machine. After a few days, I wanted to try out a different operating system (elementary OS). I downloaded the ISO file, burned it to a USB, and tested that the USB booted from a different computer (I have two: the Lenovo and an HP). I was able to get the boot screen, and everything worked like a charm after I set the BIOS to boot from USB Disk Drive instead of HD.

    After this, I went back to the Lenovo and tried to open up the boot menu by pressing F12, so that I could load from a temporary device. To my surprise, nothing but the HD was listed. There was no optical drive, no USB drive, absolutely nothing. So, I thought that these devices were probably disabled, and went into my BIOS to check. I saw that all my devices were enabled - USB and all the other devices, such as the network cable and the rest. So, I thought this probably had something to do with UEFI and legacy boot options, and made sure that both were enabled. This did not solve the problem either. Again, I got nothing but the option to boot from my hard disk.

    I thought the USB had to be at fault. I tried different ports, but to no avail. Next, I tried with a live CD, which had Ubuntu on it. This failed too. I simply could not boot from anything other than my hard disk. Okay, so at this point I was pretty desperate, so I installed Boot-Repair through:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install boot-repair

    What this did is lead me to GRUB. Ideally, it's just a screen that gives me the option to load from Ubuntu or Advanced Settings; the Advanced Settings had nothing but Ubuntu options in it. So, I kept on pressing ESC and that led me to the GRUB console, and that's where I am right now with my Lenovo. I've also tried updating the BIOS, but Lenovo only has packages for Red Hat and Windows. So, a dead end there too.

    Right now, I need to know if there is any way that I can just delete everything from my Lenovo. I want to revert it back to its blank factory condition. How can I achieve this? I have tried to elaborate my problem as best I could. If there is any important information that I've missed out, please do not hesitate to leave a comment. I would have included some screenshots, but BIOS screenshots are a little hard to manage. However, I can provide a camera image of the boot screen if needed (doing that as we speak).

    Read the article

  • How to troubleshoot errors with TeamCity

    - by Tomas Lycken
    I'm following this guide to set up a small environment for source control and automated builds - mostly for learning what it is and how it works, but also for use in those of my hobby projects that I believe will actually be useful some day. However, at the step where he commits and builds, I fail to get a success status in the TeamCity history log; I keep getting the error described in the log below. I have verified with Windows Explorer that the solution file it can't find is actually there, so I really don't know what to do. How do I fix/troubleshoot this?

        [15:16:06]: Checking for changes
        [15:16:08]: Clearing temporary directory: C:\Program Files\JetBrains\BuildAgent\temp\buildTmp
        [15:16:08]: Checkout directory: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:08]: Updating sources: server side checkout...
        [15:16:08]: [Updating sources: server side checkout...] Building incremental patch for VCS root: DemoProjects
        [15:16:09]: [Updating sources: server side checkout...] Repository sources transferred
        [15:16:09]: [Updating sources: server side checkout...] Updating C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:10]: Start process: "c:\Program Files\JetBrains\BuildAgent\bin\..\plugins\dotnetPlugin\bin\JetBrains.BuildServer.MsBuildBootstrap.exe" "/workdir:C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588" /msbuildPath:C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe
        [15:16:10]: in: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:11]: TeamCity MSBuild bootstrap v5.1 Copyright (C) JetBrains s.r.o.
        [15:16:11]: Application failed with internal error:
        [15:16:11]: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
        [15:16:11]: System.Exception: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
        [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Impl.MSBuildBootstrapFactory.Create(IClientRunArgs args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap.Core\src\Impl\MSBuildBootstrapFactory.cs:line 25
        [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Program.Run(String[] _args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap\src\Program.cs:line 66
        [15:16:11]: Process exited with code -11
        [15:16:11]: Build finished

    Read the article

  • Why does this Maven command produce a LifecycleExecutionException?

    - by ovr
    COMMAND:

        mvn org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4:generate \
            -DarchetypeGroupId=org.beardedgeeks \
            -DarchetypeArtifactId=gae-eclipse-maven-archetype \
            -DarchetypeVersion=1.1.2 \
            -DarchetypeRepository=http://beardedgeeks.googlecode.com/svn/repository/releases

    ERROR:

        + Error stacktraces are turned on.
        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Internal error in the plugin manager getting plugin 'org.apache.maven.plugins:maven-archetype-plugin': Plugin 'org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4' has an invalid descriptor:
        1) Plugin's descriptor contains the wrong group ID: net.kindleit
        2) Plugin's descriptor contains the wrong artifact ID: maven-gae-plugin
        3) Plugin's descriptor contains the wrong version: 0.5.9
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: Internal error in the plugin manager getting plugin 'org.apache.maven.plugins:maven-archetype-plugin': Plugin 'org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4' has an invalid descriptor:
        1) Plugin's descriptor contains the wrong group ID: net.kindleit
        2) Plugin's descriptor contains the wrong artifact ID: maven-gae-plugin
        3) Plugin's descriptor contains the wrong version: 0.5.9
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.verifyPlugin(DefaultLifecycleExecutor.java:1544)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.getMojoDescriptor(DefaultLifecycleExecutor.java:1851)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.segmentTaskListByAggregationNeeds(DefaultLifecycleExecutor.java:462)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:175)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
        at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
        at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
        at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
        at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.plugin.PluginManagerException: Plugin 'org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4' has an invalid descriptor:
        1) Plugin's descriptor contains the wrong group ID: net.kindleit
        2) Plugin's descriptor contains the wrong artifact ID: maven-gae-plugin
        3) Plugin's descriptor contains the wrong version: 0.5.9
        at org.apache.maven.plugin.DefaultPluginManager.addPlugin(DefaultPluginManager.java:330)
        at org.apache.maven.plugin.DefaultPluginManager.verifyVersionedPlugin(DefaultPluginManager.java:224)
        at org.apache.maven.plugin.DefaultPluginManager.verifyPlugin(DefaultPluginManager.java:184)
        at org.apache.maven.plugin.DefaultPluginManager.loadPluginDescriptor(DefaultPluginManager.java:1642)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.verifyPlugin(DefaultLifecycleExecutor.java:1540)
        ... 15 more
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 1 second
        [INFO] Finished at: Wed Jun 09 19:45:30 CEST 2010
        [INFO] Final Memory: 3M/15M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • Silverlight 4 Overriding the DataForm Autogenerate to insert Combo Boxes bound to Converters.

    - by kmacmahon
    I've been working towards a solution for some time and could use a little help. I know I've seen an example of this before, but tonight I cannot find anything close to what I need.

    I have a service that provides me all my drop-down lists, either from cache or from the DomainService. They are presented as IEnumerable, and are requested from a repository with GetLookup(LookupId). I have created a custom attribute with which I have decorated my metadata class, looking something like this:

        [Lookup(Lookup.Products)]
        public Guid ProductId

    I have created a custom DataForm that is set to AutoGenerateFields, and I am intercepting the autogenerated fields. I am checking for my custom attribute, and that works. Given this code in my CustomDataForm (standard comments removed for brevity), what is the next step to override the field generation and place a bound ComboBox in its place?

        public class CustomDataForm : DataForm
        {
            private Dictionary<string, DataField> fields = new Dictionary<string, DataField>();

            public Dictionary<string, DataField> Fields
            {
                get { return this.fields; }
            }

            protected override void OnAutoGeneratingField(DataFormAutoGeneratingFieldEventArgs e)
            {
                PropertyInfo propertyInfo = this.CurrentItem.GetType().GetProperty(e.PropertyName);

                foreach (Attribute attribute in propertyInfo.GetCustomAttributes(true))
                {
                    LookupFieldAttribute lookupFieldAttribute = attribute as LookupFieldAttribute;
                    if (lookupFieldAttribute != null)
                    {
                        // Create a combo box.
                        // Bind it to my Lookup IEnumerable
                        // Set the selected item to my Field's Value
                        // Set the binding two way
                    }
                }
                this.fields[e.PropertyName] = e.Field;
                base.OnAutoGeneratingField(e);
            }
        }

    Any cited working examples for SL4/VS2010 would be appreciated. Thanks
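    Not a cited working example, but the body of that if-block might look roughly like the following sketch. It assumes a lookup service (lookupService) and a LookupId property on the attribute - both inventions here - and swaps in the generated control via DataField.Content:

        // Sketch only: lookupService and lookupFieldAttribute.LookupId are assumptions.
        var comboBox = new ComboBox();
        comboBox.ItemsSource = lookupService.GetLookup(lookupFieldAttribute.LookupId);
        comboBox.SelectedValuePath = "Id";   // assuming lookup items expose an Id

        comboBox.SetBinding(ComboBox.SelectedValueProperty, new Binding(e.PropertyName)
        {
            Source = this.CurrentItem,
            Mode = BindingMode.TwoWay
        });

        e.Field.Content = comboBox;   // replace the autogenerated control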

    Read the article

  • Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF

    - by Echiban
    I think I am stuck in the paralysis of analysis. Please help! I currently have a project that:

    - uses NHibernate on SQLite
    - implements the Repository and Unit of Work pattern: http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/10/nhibernate-and-the-unit-of-work-pattern.aspx
    - follows an MVVM strategy in a WPF app

    The Unit of Work implementation in my case supports one NHibernate session at a time. I thought at the time that this made sense; it hides the inner workings of the NHibernate session from the ViewModel. Now, according to Oren Eini (Ayende) in http://msdn.microsoft.com/en-us/magazine/ee819139.aspx, NHibernate sessions should be created/disposed when the view associated with the presenter/viewmodel is disposed. He presents reasons why you don't want one session per Windows app, nor a session created/disposed per transaction. This unfortunately poses a problem, because my UI can easily have 10+ views/viewmodels present in an app.

    He is presenting an MVP strategy, but does his advice translate to MVVM? Does this mean that I should scrap the Unit of Work and have viewmodels create NHibernate sessions directly? Should a WPF app only have one working session at a time? If that is true, when should I create/dispose an NHibernate session? And I still haven't considered how NHibernate stateless sessions fit into all this! My brain is going to explode. Please help!
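    For reference, Ayende's session-per-presenter advice translates to MVVM roughly as session-per-viewmodel; a minimal sketch (class and entity names are illustrative, error handling omitted):

        using System;
        using NHibernate;

        // Each view model owns one ISession for its lifetime and disposes it
        // when the associated view closes.
        public class CustomerEditViewModel : IDisposable
        {
            private readonly ISession session;

            public CustomerEditViewModel(ISessionFactory sessionFactory)
            {
                session = sessionFactory.OpenSession();
            }

            public void Save(Customer customer)
            {
                using (var tx = session.BeginTransaction())
                {
                    session.SaveOrUpdate(customer);
                    tx.Commit();
                }
            }

            public void Dispose()
            {
                session.Dispose();
            }
        }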

    Read the article

  • How to build android cts? And how to add and run your test case?

    - by Leox
    From 2.0 the CTS is freely downloadable from Android's repository, but there is no documentation for it. Can anyone tell me:

    - How to build the CTS? Is there a standard procedure?
    - How to run the CTS?
    - How to add a customized test case?

    Here, I'll share my experience. After repo sync of all the source, you can't directly run "make" to build everything; you will get some errors. Now, I'm trying to first build the Android source without the CTS, and then build the CTS alone. Also, here are some references for running the CTS:

        http://i-miss-erin.blogspot.com/2010/05/how-to-add-test-plan-package-to-android.html
        www.mentby.com/chenny/how-does-cts-work-where-can-i-get-the-test-streams.html
        www.jxva.com/?act=blog!article&articleId=157

    1st update @ 5-13 18:39 +8:00. I do the following steps:

    1. Build the Android source without the CTS (move cts out of $SDK_ROOT).
    2. Build the CTS (move cts back).

    With both JDK 1.5 and 1.6 I get the following errors:

    1. The 1st "make cts" reports: "Caused by: java.io.FileNotFoundException: ...(Too many open files)"
    2. The 2nd "make cts" reports: "acp: file 'out/host/linux-x86/obj/EXECUTABLES/vm-tests_intermediates/tests/data' does not exist"
    3. The 3rd "make cts" reports: "/bin/bash: line 0: cd: out/host/linux-x86/obj/EXECUTABLES/vm-tests_intermediates/hostjunit_files/classes: No such file or directory"
    4. The last "make cts" reports: "zip error: Nothing to do! (try: zip -q -r ../../android.core.vm-tests.jar . -i .)"

    Read the article

  • EF 4, POCO and AddOrUpdate

    - by Eric J.
    I'm trying to create a method on an EF4 POCO repository called AddOrUpdate. The idea is that the business layer can pass in a POCO object, and the persistence framework will add the object if it is new (not yet in the database), or else update the database (once SaveChanges() is called) with the new value. This is similar to some other questions I have asked about EF, but I'm only about 80% there in understanding this, so please forgive partial duplication.

    The part I'm missing is how to update the object graph in my ObjectContext/associated ObjectSet for the passed-in business object once I have determined that the business object indeed already exists in the database (and has now been loaded thanks to TryGetObjectByKey). ApplyCurrentValues sounds sort of like what I want, but it only copies scalar values and doesn't seem intended to update the object graph in the ObjectContext/ObjectSet. Because of my particular use case, I don't care about concurrency right now.

        public void AddOrUpdate(BO biz)
        {
            object obj;
            EntityKey ek = Ctx.CreateEntityKey(mySetName, biz);
            bool found = Ctx.TryGetObjectByKey(ek, out obj);
            if (found)
            {
                // How do I do what this method name implies? Biz is a parent with children.
                mySet.TellTheSetToUpdateThisObject(biz);
            }
            else
            {
                mySet.AddObject(biz);
            }
            Ctx.DetectChanges();
        }
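    For the scalar part, EF4's ObjectContext does expose ApplyCurrentValues, so the found-branch can at least be sketched like this; the child collections still have to be walked by hand, so nothing here is a complete graph update:

        if (found)
        {
            // Copies scalar properties from the detached 'biz' onto the
            // attached entity that TryGetObjectByKey just loaded.
            Ctx.ApplyCurrentValues(mySetName, biz);

            // Children are not touched by ApplyCurrentValues; each child
            // would need to be located and applied/added the same way.
        }
        else
        {
            mySet.AddObject(biz);
        }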

    Read the article

  • NHibernate.Bytecode.UnableToLoadProxyFactoryFactoryException

    - by Shane
    I have the following code set up in my startup:

        IDictionary properties = new Dictionary();
        properties.Add("connection.driver_class", "NHibernate.Driver.SqlClientDriver");
        properties.Add("dialect", "NHibernate.Dialect.MsSql2005Dialect");
        properties.Add("proxyfactory.factory_class", "NNHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");
        properties.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
        properties.Add("connection.connection_string", "Data Source=ZEUS;Initial Catalog=mydb;Persist Security Info=True;User ID=sa;Password=xxxxxxxx");

        InPlaceConfigurationSource source = new InPlaceConfigurationSource();
        source.Add(typeof(ActiveRecordBase), (IDictionary<string, string>) properties);

        Assembly asm = Assembly.Load("Repository");
        Castle.ActiveRecord.ActiveRecordStarter.Initialize(asm, source);

    I am getting the following error:

        failed: NHibernate.Bytecode.UnableToLoadProxyFactoryFactoryException :
        Unable to load type 'NNHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle' during configuration of proxy factory class.
        Possible causes are:
        - The NHibernate.Bytecode provider assembly was not deployed.
        - The typeName used to initialize the 'proxyfactory.factory_class' property of the session-factory section is not well formed.

    I have read and read; I am referencing all the assemblies listed, and I am at a total loss as to what to try next:

        Castle.ActiveRecord.dll
        Castle.DynamicProxy2.dll
        Iesi.Collections.dll
        log4net.dll
        NHibernate.dll
        NHibernate.ByteCode.Castle.dll

    I am 100% sure the assembly is in the bin. Anyone have any ideas?

    Read the article

  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS):

        project_root\
        |-libs\
        |-tools\
        |-arch_1\
        | |-include
        | |-source
        |-arch_2\
          |-include
          |-source

    My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well.

    Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)?

    Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can re-write the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment, our time would be much better spent making the migration run smoothly than re-writing the build system (which is next on my to-do list!). Thank you for your help!
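    The closest built-in analogue is probably file externals (Subversion 1.6 and later), which pin a file in one folder to a file elsewhere in the same repository, so both paths are backed by a single versioned file; a sketch with an illustrative file name:

        # Make arch_2/include/common.h track arch_1/include/common.h
        svn propset svn:externals '^/project_root/arch_1/include/common.h common.h' arch_2/include
        svn commit -m "link arch_2 header to arch_1 via file external" arch_2/include
        svn update    # materializes the external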

    Read the article

  • C# BindingSource.AddingNew is never called?

    - by msfanboy
    Hello,
    BindingSource.AddingNew is never called when I leave the cell of my datagrid. The DataGrid has as its data source the BindingSource, which in turn has a "List" of "Customer". What does the BindingSource need to create a new Customer object and add it to the underlying ICustomerList? Of course an interface has no constructor... That's the exception I get:

        System.MissingMethodException: The constructor for the type "SAT.EnCoDe.Administration.ICustomer" was not found.
        at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
        at System.SecurityUtils.SecureCreateInstance(Type type, Object[] args)
        at System.ComponentModel.BindingList`1.AddNewCore()
        at System.ComponentModel.BindingList`1.System.ComponentModel.IBindingList.AddNew()
        at System.Windows.Forms.BindingSource.AddNew()
        at System.Windows.Forms.CurrencyManager.AddNew()
        at DevExpress.Data.CurrencyDataController.OnCurrencyManagerAddNew()
        at DevExpress.Data.CurrencyDataController.AddNewRow()
        at DevExpress.XtraGrid.Views.Grid.GridView.OnActiveEditor_ValueModified(Object sender, EventArgs e)
        at DevExpress.XtraEditors.Repository.RepositoryItem.RaiseModified(EventArgs e)
        at DevExpress.XtraEditors.BaseEdit.OnEditValueChanging(ChangingEventArgs e)
        at DevExpress.XtraEditors.TextEdit.OnMaskBox_ValueChanged(Object sender, EventArgs e)
        at DevExpress.XtraEditors.Mask.MaskBox.RaiseEditTextChanged()
        at System.Windows.Forms.TextBoxBase.WmReflectCommand(Message& m)
        at DevExpress.XtraEditors.Mask.MaskBox.BaseWndProc(Message& m)
        at DevExpress.XtraEditors.Mask.MaskBox.WndProc(Message& m)
        at DevExpress.XtraEditors.TextBoxMaskBox.WndProc(Message& msg)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
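    Since BindingList<ICustomer> cannot construct an interface, one usual pattern (sketched below with a hypothetical concrete Customer class) is to hand the BindingSource the new object yourself in AddingNew; with e.NewObject set, it should not fall through to the list's reflection-based AddNewCore:

        // Sketch only: Customer is an assumed concrete implementation of ICustomer.
        bindingSource.AllowNew = true;
        bindingSource.AddingNew += (sender, e) =>
        {
            e.NewObject = new Customer();
        };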

    Read the article

  • Why is jQuery so widely adopted versus other Javascript frameworks?

    - by Andrew Moore
    I manage a group of programmers and I do value my employees' opinions, but lately we've been divided as to which framework to use on web projects. I personally favor MooTools, but some of my team seems to want to migrate to jQuery because it is more widely adopted. That by itself is not enough for me to allow a migration.

    I have used both jQuery and MooTools. This particular essay tends to reflect how I feel about both frameworks. jQuery is great for DOM manipulation, but seems to be limited to helping you do that.

    Feature-wise, both jQuery and MooTools allow for easy DOM selection and manipulation:

        // jQuery
        $('#someContainer div[class~=dialog]')
            .css('border', '2px solid red')
            .addClass('critical');

        // MooTools
        $('#someContainer div[class~=dialog]')
            .setStyle('border', '2px solid red')
            .addClass('critical');

    Both jQuery and MooTools allow for easy AJAX:

        // jQuery
        $('#someContainer div[class~=dialog]')
            .load('/DialogContent.html');

        // MooTools (Using shorthand notation, you can also use Request.HTML)
        $('#someContainer div[class~=dialog]')
            .load('/DialogContent.html');

    Both jQuery and MooTools allow for easy DOM animation:

        // jQuery
        $('#someContainer div[class~=dialog]')
            .animate({opacity: 1}, 500);

        // MooTools (Using shorthand notation, you can also use Fx.Tween).
        $('#someContainer div[class~=dialog]')
            .set('tween', {duration: 500})
            .tween('opacity', 1);

    jQuery offers the following extras:
    - Large community of supporters
    - Plugin repository
    - Integration with Microsoft's ASP.NET and Visual Studio
    - Used by Microsoft, Google and others

    MooTools offers the following extras:
    - Object-oriented framework with classic OOP emulation for JS
    - Extended native objects
    - Higher consistency between browsers for native functions support
    - Easier code reuse
    - Used by the World Wide Web Consortium, Palm and others

    Given that, it seems that MooTools does everything jQuery does and more (some things I cannot do in jQuery I can in MooTools), but jQuery has a smaller learning curve. So the question is: why did you or your team choose jQuery over another JavaScript framework?

    Note: While I know and admit jQuery is a great framework, there are other options around, and I'm trying to make a decision as to why jQuery should be our choice versus what we use right now (MooTools).

    Read the article

  • Failed sending bytes array to JAX-WS web service on Axis

    - by user304309
    Hi, I have made a small example to show my problem. Here is my web service:

        package service;

        import javax.jws.WebMethod;
        import javax.jws.WebService;

        @WebService
        public class BytesService {

            @WebMethod
            public String redirectString(String string) {
                return string + " - is what you sended";
            }

            @WebMethod
            public byte[] redirectBytes(byte[] bytes) {
                System.out.println("### redirectBytes");
                System.out.println("### bytes lenght:" + bytes.length);
                System.out.println("### message" + new String(bytes));
                return bytes;
            }

            @WebMethod
            public byte[] genBytes() {
                byte[] bytes = "Hello".getBytes();
                return bytes;
            }
        }

    I pack it in a jar file and store it in the "axis2-1.5.1/repository/servicejars" folder. Then I generate a client proxy using Eclipse for EE's default utilities, and use it in my code in the following way:

        BytesService service = new BytesServiceProxy();

        System.out.println("Redirect string");
        System.out.println(service.redirectString("Hello"));

        System.out.println("Redirect bytes");
        byte[] param = { (byte) 21, (byte) 22, (byte) 23 };
        System.out.println(param.length);
        param = service.redirectBytes(param);
        System.out.println(param.length);

        System.out.println("Gen bytes");
        param = service.genBytes();
        System.out.println(param.length);

    And here is what my client prints:

        Redirect string
        Hello - is what you sended
        Redirect bytes
        3
        0
        Gen bytes
        5

    And on the server I have:

        ### redirectBytes
        ### bytes lenght:0
        ### message

    So a byte array can normally be transferred from the service, but is not accepted from the client. And it works fine with strings. Now I use Base64Encoder, but I dislike this solution.
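    For completeness, the Base64 workaround mentioned at the end needs nothing beyond the JDK (javax.xml.bind.DatatypeConverter, Java 6+), with the web method changed to take and return String; a sketch:

        import javax.xml.bind.DatatypeConverter;

        public class Base64Workaround {
            public static void main(String[] args) {
                byte[] payload = { 21, 22, 23 };

                // Client side: encode before calling the web method.
                String wire = DatatypeConverter.printBase64Binary(payload);

                // Service side: decode the received string back into bytes.
                byte[] received = DatatypeConverter.parseBase64Binary(wire);

                System.out.println(received.length); // prints 3
            }
        }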

    Read the article

  • Jboss Error-Cannot process metadata

    - by Nila
    Hi! I'm trying to implement a stateless session bean (EJB3) in JBoss 5, using NetBeans 6.8 as an editor. When I try deploying my application, I get the following error. What is the issue with this?

        17:45:04,901 ERROR [AbstractKernelController] Error installing to PostClassLoader: name=vfszip:/E:/Shalini/jboss-5.1.0.GA/server/default/deploy/InsighIT1.1-ejb.jar/ state=ClassLoader mode=Manual requiredState=PostClassLoader
        org.jboss.deployers.spi.DeploymentException: Cannot process metadata
        at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
        at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:181)
        at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:93)
        at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1210)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
        at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
        at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
        at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
        at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
        at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702)
        at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117)
        at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362)
        at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
        at java.lang.Thread.run(Thread.java:619)
        Caused by: java.lang.ClassNotFoundException: tomcat.Main from BaseClassLoader@1d6d136{VFSClassLoaderPolicy@41312b{name=vfszip:/E:/hh/jboss-5.1.0.GA/server/default/deploy/InsighIT1.1-ejb.jar/

    Read the article

  • Coding a parser for a domain specific language in Java

    - by Bruno Rothgiesser
    We want to design a simple domain-specific language for writing test scripts to automatically test an XML-based interface of one of our applications. A sample test would be:

    1. Get an input XML file from a network shared folder or Subversion repository.
    2. Import the XML file using the interface.
    3. Check if the import result message was successful.
    4. Export the XML corresponding to the object that was just imported using the interface and check that it is correct.

    If the domain-specific language can be declarative, and its statements look as close to my sentences in the sample above as possible, that will be awesome, because people won't necessarily have to be programmers to understand/write/maintain the tests. Something like:

        newObject = GET FILE "http://svn/repos/template1.xml"
        responseMessage = IMPORT newObject
        newObjectID = GET PROPERTY '/object/id/' FROM responseMessage
        (..)

    But then I'm not sure how to implement a simple parser for that language in Java. Back in school, 10 years ago, I coded a language parser using Lex and Yacc for the C language. Maybe an approach would be to use some equivalent for Java? Or I could give up the idea of having a declarative language and choose an XML-based language instead, which would possibly be easier to write a parser for. What approach would you recommend?
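    The usual Lex/Yacc equivalents in Java-land are ANTLR and JavaCC, but to get a feel for the hand-rolled end of the spectrum, a single statement shape of the sample language can be recognized with nothing but the JDK; a sketch, not a full grammar:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Minimal sketch: recognizes one statement form of the proposed DSL.
        public class MiniDslParser {
            private static final Pattern GET_FILE =
                    Pattern.compile("(\\w+)\\s*=\\s*GET FILE \"([^\"]+)\"");

            public static void main(String[] args) {
                Matcher m = GET_FILE.matcher(
                        "newObject = GET FILE \"http://svn/repos/template1.xml\"");
                if (m.matches()) {
                    System.out.println("assign to: " + m.group(1)); // newObject
                    System.out.println("fetch url: " + m.group(2)); // the URL
                }
            }
        }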

    Read the article

  • C# Design How to Elegantly wrap a DAL class

    - by guazz
    I have an application which uses MyGeneration's dOOdads ORM to generate its Data Access Layer. dOOdads works by generating a persistence class for each table in the database. It works like so:

        // Load and save an employee
        emps = new Employees();
        if (emps.LoadByPrimaryKey(42))
        {
            emps.LastName = "Just Got Married";
            emps.Save();
        }

        // Add a new record
        Employees emps = new Employees();
        emps.AddNew();
        emps.FirstName = "Mr.";
        emps.LastName = "dOOdad";
        emps.Save();
        // After save the identity column is already here for me.
        int i = emps.EmployeeID;

        // Dynamic query - all employees with 'A' in their last name
        Employees emps = new Employees();
        emps.Where.LastName.Value = "%A%";
        emps.Where.LastName.Operator = WhereParameter.Operand.Like;
        emps.Query.Load();

    For the above example (i.e. the Employees DAL object), I would like to know the best method/technique to abstract away some of the implementation details of my classes. I don't believe that an Employee class should have Employees (the DAL) specifics in its methods - or perhaps this is acceptable? Is it possible to implement some form of repository pattern? Bear in mind that this is a high-volume, performance-critical application. Thanks, j
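    One low-ceremony option is to keep the generated classes but hide them behind a small repository interface, so business code never touches dOOdads directly; a sketch (the interface and names are invented for illustration):

        // Sketch only: wraps the generated Employees class behind an interface.
        public interface IEmployeeRepository
        {
            Employees GetById(int employeeId);   // null when not found
            void Save(Employees employee);
        }

        public class EmployeeRepository : IEmployeeRepository
        {
            public Employees GetById(int employeeId)
            {
                var emps = new Employees();
                return emps.LoadByPrimaryKey(employeeId) ? emps : null;
            }

            public void Save(Employees employee)
            {
                employee.Save();
            }
        }

    A fuller version would map Employees onto a plain domain class so the generated type does not leak through the interface.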

    Read the article

  • Unit Testing in Entity Framework 4 - using CreateSourceQuery

    - by Adam
    There are many great tutorials on abstracting your EF4 context so that it can be tested against (without involving a DB). Two great (and similar) examples are here: http://blogs.msdn.com/b/adonet/archive/2009/12/17/walkthrough-test-driven-development-with-the-entity-framework-4-0.aspx (oops, not enough rep. points to post the second URL).

    Basically you wind up querying your repository using LINQ to Objects while testing, and LINQ to Entities while running, and usually they behave the same, but when you start hitting more advanced functionality, problems arise. Here's the question: when using LINQ to Objects against IObjectSet (i.e. unit testing), CreateSourceQuery returns null, which will probably cause your entire query to crash and burn. For example:

        O = db.Orders.First();
        O.OrderItems.CreateSourceQuery().ToList();

    Is there a way to get CreateSourceQuery to just return the underlying collection, rather than null, when working with collections? Unfortunately EntityCollection is sealed, and so cannot be mocked. This isn't really the end of the world if EF4 won't let you abstract things to this level; I just wanted to make sure there wasn't something I was missing.

    Read the article

  • Nhibernate Guid with PK MySQL

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a question. I use NHibernate with MySQL. In my entities I use Id (PK) for business logic and a Guid for replication. So my BaseDomain:

        public class BaseDomain
        {
            public virtual int Id { get; set; }
            public virtual Guid Guid { get; set; }

            public class Properties
            {
                public const string Id = "Id";
                public const string Guid = "Guid";
            }

            public BaseDomain() { }
        }

    My usage domain:

        public class ActivityCategory : BaseDomain
        {
            public ActivityCategory() { }

            public virtual string Name { get; set; }

            public new class Properties
            {
                public const string Id = "Id";
                public const string Guid = "Guid";
                public const string Name = "Name";

                private Properties() { }
            }
        }

    Mapping:

        <class name="ActivityCategory, Clients.Core" table='Activity_category'>
          <id name="Id" unsaved-value="0" type="int">
            <column name="Id" not-null="true"/>
            <generator class="native"/>
          </id>
          <property name="Guid"/>
          <property name="Name"/>
        </class>

    But when I insert my entity:

        [Test]
        public void Test()
        {
            ActivityCategory ac = new ActivityCategory();
            ac.Name = "Test";
            using (var repo = new Repository<ActivityCategory>())
                repo.Save(ac);
        }

    I always get '00000000-0000-0000-0000-000000000000' in my Guid field. What should I do to generate the right Guid? Maybe mapping? Thanks a lot!
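    One common approach, sketched here on the assumption that the Guid only needs to be assigned client-side rather than by the database: initialize it in the base class constructor, so every new entity carries a value before it is saved.

        public BaseDomain()
        {
            Guid = Guid.NewGuid();
        }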

    Read the article

  • Yet another Subversion "Commit failed" MERGE of 'blabla': 200 OK

    - by marty3d
    Hi! I get the infamous "MERGE of 'whatever': 200 OK" whenever I try to commit using a post-commit hook on Windows (running the repository and Trac locally), and I'm going crazy. I've been looking all over for a day now without finding any solutions. So here's how it's set up and what I've tried so far.

    Settings:
    - Windows 7 (64-bit)
    - VisualSVN Server
    - TortoiseSVN
    - Trac 0.11.6

    I'm using the three standard scripts for post-commit on Windows. Everything works when I run post-commit.cmd from the command prompt with the repo and changeset number as parameters. After extensive troubleshooting, I found that if I remove the last line in trac-post-commit.cmd,

        Python "%~dp0\trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" -u "%AUTHOR%" -m "%LOG%"

    the "Commit failed" error goes away. Adding 1/0 (generating a division-by-zero error) in the Python script doesn't show anything different; from the command prompt I get an error, though. Removing all code in the Python script also makes the "Commit failed" go away, so I guess the culprit is in trac-post-commit-hook.py. Perhaps if I could send the actual error to a log file, I could dig a little deeper, but I'm not sure how.

    post-commit.cmd:

        call %~dp0\trac-post-commit-hook.cmd %1 %2

    trac-post-commit-hook.cmd: http://trac.edgewall.org/browser/trunk/contrib/trac-post-commit-hook?rev=920

    Thank you so much, it would mean a lot if someone could assist a little here! /Martin
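    On the logging point, plain cmd redirection around that last line of trac-post-commit.cmd should capture the hook's real error; a sketch, with the log path being an assumption:

        rem Append both stdout and stderr of the Python hook to a log file
        Python "%~dp0\trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" -u "%AUTHOR%" -m "%LOG%" >> "%~dp0\post-commit.log" 2>&1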

    Read the article

  • Maven deploy:deploy-file not found due to version/timestamp appended to jar

    - by JamesC
    I'm having a problem using deploy:deploy-file with snapshots that I'd like some advice on, please. I have two projects: 1) an Ant-based project and 2) a Maven-based project that consumes the jars of the other project via Archiva. I've added a target to the Ant project to deploy snapshots on every successful build during our iteration. The problem is that the Maven project cannot find them, because the name of the dependency has a timestamp appended, like so:

        someJar-1.0-20100407.171211-1.jar

    Here is the Ant target:

        <exec executable="${maven.bin}" dir="../lib">
          <arg value="deploy:deploy-file" />
          <arg value="-DgroupId=com.my.package" />
          <arg value="-DartifactId=${ant.project.name}" />
          <arg value="-Dversion=${manifest.implementation.version}-SNAPSHOT" />
          <arg value="-Dpackaging=jar" />
          <arg value="-Dfile=../lib/${ant.project.name}-${manifest.implementation.version}-SNAPSHOT.jar" />
          <arg value="-Durl=http://archiva.xxx.com/archiva/repository/snapshots" />
          <arg value="-DrepositoryId=snapshots" />
        </exec>

    I have a similar Ant target for releases and this works fine. Other pure Maven projects which deploy snapshots via mvn deploy work fine. Does anyone know where I am going wrong? Thank You

    Update: Figured out the answer, see below.

    Read the article

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up, which follows the basic standard layout for SVN repositories, but is more nested:

        |-trunk
        |-branches
        | |-releases
        | | |-releaseA
        | | `-releaseB
        | `-features
        |   |-featureX
        |   `-featureY
        |-tags
          |-releaseA
          | |-beta
          | `-RTP
          `-releaseB
            |-beta
            `-RTP

    (The feature branches are obviously temporary branches, but we have to take them into consideration, as it won't be feasible to close all of them at once in the near future.)

    For several reasons, but primarily because merges have been becoming an increasing pain, we are considering a switch to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g., yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seem to be able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory respectively. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well.

    So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure if that would be the right approach in the first place, depending on how the tools handle historic merges and need to be aware of all other branches.
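    One approach that may be worth testing (unverified against this layout): hg convert's convert.svn.trunk/branches/tags options expect flat directories, but conversion into the same target repository is incremental, so the nested folders could be fed in one pass at a time; a sketch:

        # Pass 1: trunk plus the release branches
        hg convert --config convert.svn.trunk=trunk \
                   --config convert.svn.branches=branches/releases \
                   file:///path/to/svn_repo converted-hg

        # Pass 2: pick up the feature branches incrementally
        hg convert --config convert.svn.branches=branches/features \
                   file:///path/to/svn_repo converted-hg

    The nested tags would still need similar per-folder passes, so treat this strictly as a starting point.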

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete, it is built and only then is the code committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again.

    This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases changes in UAT are moved to live in error, as they are mixed up in files associated with a signed-off release.

    I would like to move this to a more controlled and automated process where all source code and output files are held in SVN, and releases to dev, UAT and live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is on how to manage the releases of multiple changes where some will be signed off and moved on and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a release branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?

    Read the article
