Search Results

Search found 4805 results on 193 pages for 'repository'.


  • Architecture Best Practice (MVC): Repository Returns Object & Object Member Accessed Directly or Repository Returns Object Member

    - by coderabbi
    Architecturally speaking, which is the preferable approach (and why)?

        $validation_date = $users_repository->getUser($user_id)->validation_date;

    This seems to violate the Law of Demeter by accessing a member of an object returned by a method call, and to violate encapsulation by accessing an object member directly.

        $validation_date = $users_repository->getUserValidationDate($user_id);

    This seems to violate the Single Responsibility Principle, as $users_repository no longer just returns User objects.
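
    A third shape sometimes weighed against these two is to keep the repository returning whole User objects but give User an intent-level accessor. A minimal sketch with a hypothetical User class; note it answers the encapsulation objection, while the Demeter-style chain remains:

        <?php
        // Hypothetical User class: the field is private, and callers go
        // through an accessor instead of touching the member directly.
        class User {
            private $validation_date;

            public function __construct($validation_date) {
                $this->validation_date = $validation_date;
            }

            public function getValidationDate() {
                return $this->validation_date;
            }
        }

        $validation_date = $users_repository->getUser($user_id)->getValidationDate();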

  • Unavailable repository

    - by katrina
    I am new to Ubuntu and keep butting up against errors such as this:

        Package libpng12-dev is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it: libpng12-0
        E: Unable to locate package subversion
        E: Package 'git-core' has no installation candidate
        E: Package 'build-essential' has no installation candidate
        E: Package 'autoconf' has no installation candidate
        E: Package 'libtool' has no installation candidate
        E: Unable to locate package libxml2-dev
        E: Unable to locate package libgeos-dev
        E: Unable to locate package libpq-dev
        E: Unable to locate package libbz2-dev
        E: Package 'proj' has no installation candidate
        E: Unable to locate package munin-node
        E: Unable to locate package munin
        E: Unable to locate package libprotobuf-c0-dev
        E: Unable to locate package protobuf-c-compiler
        E: Unable to locate package libfreetype6-dev
        E: Package 'libpng12-dev' has no installation candidate
        E: Unable to locate package libtiff4-dev
        E: Unable to locate package libicu-dev
        E: Unable to locate package libboost-all-dev
        E: Unable to locate package libgdal-dev
        E: Unable to locate package libcairo-dev
        E: Unable to locate package libcairomm-1.0-dev
        E: Couldn't find any package by regex 'libcairomm-1.0-dev'
        E: Unable to locate package apache2
        E: Unable to locate package apache2-dev
        E: Unable to locate package libagg-dev

    This happens when I run:

        sudo apt-get install subversion git-core tar unzip wget bzip2 build-essential autoconf libtool libxml2-dev libgeos-dev libpq-dev libbz2-dev proj munin-node munin libprotobuf-c0-dev protobuf-c-compiler libfreetype6-dev libpng12-dev libtiff4-dev libicu-dev libboost-all-dev libgdal-dev libcairo-dev libcairomm-1.0-dev apache2 apache2-dev libagg-dev

    Any help or advice would be greatly appreciated, or referrals to other questions.

  • DDD Model Design and Repository Persistence Performance Considerations

    - by agarhy
    So I have been reading about DDD for some time, trying to figure out the best approach on several issues. I tend to agree that I should design my model in a persistence-agnostic manner, and that repositories should load and persist my models in valid states. But are these approaches realistic in practice?

    I mean, it's normal for a model to hold a reference to a collection of another type. Persisting that model should mean persisting the entire collection. Fine. But do I really need to load the entire collection every time I load the model? Probably not. So I can have specialized repositories: some that load maybe a subset of the object graph via DTOs, and others that load the entire object graph. But when do I use which?

    If I have DTOs, what's stopping client code from directly calling them and completely bypassing the model? I can have mappers and factories to create my models from DTOs, maybe? But depending on the design of my models that might not always work, or it might not allow my models to be created in a valid state. What's the correct approach here?
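
    As a concrete shape for the "specialized repositories" idea, here is a minimal C# sketch with hypothetical Order/OrderSummary types (the names and members are assumptions, not part of the question):

        using System.Collections.Generic;

        // Full aggregate root: persisting it means persisting its lines too.
        public class Order
        {
            public int Id { get; set; }
            public IList<OrderLine> Lines { get; set; }
        }

        public class OrderLine
        {
            public string Product { get; set; }
            public decimal Price { get; set; }
        }

        // Flat read-side DTO: no object graph attached, safe for display lists.
        public class OrderSummary
        {
            public int Id { get; set; }
            public int LineCount { get; set; }
        }

        public interface IOrderRepository
        {
            Order Load(int id);                 // loads the whole aggregate
            OrderSummary LoadSummary(int id);   // loads only what a list view needs
            void Save(Order order);             // persists the aggregate in a valid state
        }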

  • Need help connecting to a Bitbucket repository with SourceTree on Windows 8

    - by pythonian29033
    I'm having trouble adding and cloning my repo on Bitbucket with the SourceTree app. We're only starting with this now, and we're a small company, so there's not much knowledge around this. I've gone through the documentation on SourceTree for help, but I've noticed that when I select my repo on Bitbucket, it uses the repo URL I selected and appends ".git" at the end. Then a notice message says "This is not a valid source path / URL", but when I click "Details..." I get a dialog box with nothing in it and an OK button. And when I'm done entering the details, the Clone button remains disabled. Is this Windows 8, or am I actually doing something wrong? I usually use Ubuntu, but we just got these new ASUS ultrabooks at work and it's a pain to install any Linux distro on here, so I'm stuck with Windows 8.

  • Failed to download repository information

    - by Bob Van Elst
    When I clicked Check, this error message came up. It does not come up when Update Manager starts automatically, only when I open Update Manager myself. Any ideas on how to fix it? Details:

        W:GPG error: http://ppa.launchpad.net precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FC8CA6FE7B1FEC7C
        W:Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precide/main/source/Sources 404 Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.
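
    For the NO_PUBKEY line, the key ID is right there in the message; a hedged sketch of the usual remedies, depending on whether the jonabeck PPA is still wanted:

        # Import the missing signing key named in the warning:
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys FC8CA6FE7B1FEC7C

        # The 404s suggest the PPA carries no packages for this release;
        # if so, removing it makes the fetch errors go away:
        sudo add-apt-repository --remove ppa:jonabeck/ppa
        sudo apt-get update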

  • How do I create stand-alone packages from the Ubuntu repository

    - by tachyons
    Is it possible to create stand-alone deb packages by merging dependencies, without manual repacking? I've looked at this question but it doesn't really answer what I'm trying to achieve above. If it is possible, how do I do it?

    Update 1: No tools available yet(?). So what about creating a new deb package containing all the packages, which would copy the dependencies to the cache and then execute the main package. Is that possible?

    Update 2: The above method appears to be impossible, because dpkg can't handle more than one operation at a time. Some scripts may be able to do it.

    Update 3: This tool is very helpful, but it currently won't support Oneiric and above. Still waiting for a more generic tool. Thanks in advance.

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&A's here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, which aren't in the repository, modifying the actual query.

    So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up with a ton of methods for the various things I need to filter my query on. Having a bunch of:

        IEnumerable GetProductsSinceDate(DateTime date);
        IEnumerable GetProductsByName(string name);
        IEnumerable GetProductsByID(int ID);

    If I was allowing IQueryable to be passed around, I could easily have a generic repository that looked like:

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IQueryable<T> GetAll();
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

    However, if you aren't using IQueryable, then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer, which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
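
    A minimal sketch of that last idea, keeping IQueryable private to the repository and materializing results before they leave it (the Product type and its members are hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime Created { get; set; }
        }

        public class ProductRepository
        {
            // IQueryable stays private: query composition happens only in here.
            private readonly IQueryable<Product> _source;

            public ProductRepository(IQueryable<Product> source)
            {
                _source = source;
            }

            // ToList() forces execution, so callers receive a concrete,
            // already-filtered result instead of a live query.
            public IList<Product> GetProductsByName(string name)
            {
                return _source.Where(p => p.Name == name).ToList();
            }

            public IList<Product> GetProductsSinceDate(DateTime date)
            {
                return _source.Where(p => p.Created >= date).ToList();
            }
        }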

  • Saving complex aggregates using Repository Pattern

    - by Kevin Lawrence
    We have a complex aggregate (sensitive names obfuscated for confidentiality reasons). The root, R, is composed of collections of Ms, As, Cs, Ss. Ms have collections of other low-level details, etc. R really is an aggregate (no fair suggesting we split it!). We use lazy loading to retrieve the details; no problem there. But we are struggling a little with how to save such a complex aggregate. From the caller's point of view:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r);

    Our competing solutions are:

    1. Dirty flags. Each entity in the aggregate has a dirty flag. The save() method in the repository walks the tree looking for dirty objects and saves them. Deletes and adds are a little trickier - especially with lazy loading - but doable.

    2. Event listener accumulates changes. The repository subscribes a listener to changes and accumulates events. When save is called, the repository grabs all the change events and writes them to the DB.

    3. Give up on the repository pattern. Implement overloaded save methods to save the parts of the aggregate separately. The original example would become:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r.Ps);
        repository.save(r.Cs);
        repository.save(r.Ms);

    (or worse). Advice please! What should we do?
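
    A minimal C# sketch of the first option (dirty flags), with hypothetical interfaces; this illustrates only the tree walk, not a full persistence layer:

        using System.Collections.Generic;

        // Hypothetical: every entity in the aggregate can report whether it
        // changed and can enumerate its children.
        public interface ITrackable
        {
            bool IsDirty { get; }
            IEnumerable<ITrackable> Children { get; }
        }

        public class AggregateRepository
        {
            // Save() walks the graph depth-first and persists only dirty nodes.
            public void Save(ITrackable root)
            {
                if (root.IsDirty)
                {
                    Persist(root);
                }
                foreach (ITrackable child in root.Children)
                {
                    Save(child);
                }
            }

            private void Persist(ITrackable entity)
            {
                // Map the entity to an UPDATE/INSERT here; adds and deletes
                // need extra bookkeeping, as the question notes.
            }
        }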

  • NHibernate does not update entity when repository is passed by constructor

    - by Alex
    Hi everybody, I am developing with NHibernate for the first time, in conjunction with ASP.NET MVC and StructureMap. The CodeCampServer serves as a great example for me; I really like the different concepts which were implemented there, and I can learn a lot from it. In my controllers I use constructor dependency injection to get an instance of the specific repository needed. My problem is: if I change an attribute of the customer, the customer's data is not updated in the database, although Commit() is called on the transaction object (by an HttpModule).

        public class AccountsController : Controller
        {
            private readonly ICustomerRepository repository;

            public AccountsController(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public ActionResult Save(Customer customer)
            {
                Customer customerToUpdate = repository.GetById(customer.Id);
                customerToUpdate.GivenName = "test"; // <-- customer does not get updated in database
                return View();
            }
        }

    On the other hand, this is working:

        public class AccountsController : Controller
        {
            [LoadCurrentCustomer]
            public ActionResult Save(Customer customer)
            {
                customer.GivenName = "test"; // <-- customer gets updated
                return View();
            }
        }

        public class LoadCurrentCustomer : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                const string parameterName = "Customer";
                if (filterContext.ActionParameters.ContainsKey(parameterName))
                {
                    if (filterContext.HttpContext.User.Identity.IsAuthenticated)
                    {
                        Customer CurrentCustomer = DependencyResolverFactory
                            .GetDefault()
                            .Resolve<IUserSession>()
                            .GetCurrentUser();
                        filterContext.ActionParameters[parameterName] = CurrentCustomer;
                    }
                }
                base.OnActionExecuting(filterContext);
            }
        }

        public class UserSession : IUserSession
        {
            private readonly ICustomerRepository repository;

            public UserSession(ICustomerRepository customerRepository)
            {
                repository = customerRepository;
            }

            public Customer GetCurrentUser()
            {
                var identity = HttpContext.Current.User.Identity;
                if (!identity.IsAuthenticated)
                {
                    return null;
                }
                Customer customer = repository.GetByEmailAddress(identity.Name);
                return customer;
            }
        }

    I also tried to call Update on the repository, like the following code shows. But this leads to an NHibernateException which says "Illegal attempt to associate a collection with two open sessions". Actually there is only one.

        public ActionResult Save(Customer customer)
        {
            Customer customerToUpdate = repository.GetById(customer.Id);
            customer.GivenName = "test";
            repository.Update(customerToUpdate);
            return View();
        }

    Does somebody have an idea why the customer is not updated in the first example but is updated in the second example? Why does NHibernate say that there are two open sessions?

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, my company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow.

    Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following:

        In particular because you have a rather large SVN repository you will
        likely find that Fisheye will be creating a large index file on disk.
        To help improve performance a few things you can try are:
        - Increasing the available memory available to Fisheye (see the document above).
        - Migrating to an external database:
          confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database
        - Excluding files and directories from your index that aren't needed:
          confluence.atlassian.com/display/FISHEYE/Allow+(Process)

    (Sorry for not hyperlinking; don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built-in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks!

    Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea ... So here's my idea: Set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I tested them, and all servers could automatically (apt-cron?) upgrade from this repository. So my question is this: How do I configure apt on the clients so that they use the local repository only for all packages which exist on the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyways, thanks for your insight! Andreas.
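
    One way to express "use the local repo for every package it carries" is APT pinning; here is a minimal sketch, assuming the staging repository is served from a hypothetical host repo.internal:

        # /etc/apt/preferences.d/staging  (hypothetical file name; sketch only)
        Package: *
        Pin: origin "repo.internal"
        Pin-Priority: 1001

    A priority above 1000 makes APT prefer the pinned origin even when upstream carries a newer version, which is the behaviour a staging gate wants; packages absent from repo.internal fall through to the normal upstream sources at their default priority (500).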

  • Recreating an SVN repository

    - by user17183
    After a major server fault, the SVN repository was destroyed, and my working copy is the most current version. What is the way to recreate the SVN repository from my working copy? After installing SVN on a new server and trying, in my working copy:

        svn switch NEW_SVN_PATH .

    I get an error:

        Repository UUID '1c604742-6b16-462b-86e4-cc8bce959242' doesn't match expected UUID '6df69aeb-a72c-450d-8102-24036a3855f7'
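
    Since the new repository really is a different repository (hence the UUID mismatch), one hedged approach is to re-import the working copy's current state rather than switch to it. A sketch, with paths assumed; note that history is not recoverable this way:

        # On the server: create the fresh repository (path is hypothetical).
        svnadmin create /var/svn/newrepo

        # On the client: export strips the .svn metadata, then import seeds
        # the new repository with the surviving state.
        svn export /path/to/working-copy /tmp/clean-tree
        svn import /tmp/clean-tree NEW_SVN_PATH -m "Re-import from surviving working copy"
        svn checkout NEW_SVN_PATH /path/to/new-working-copy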

  • Git Clone from SSH Repository

    - by Mike Silvis
    I used to be able to clone from my personal git repository, but now I seem to be running into an error:

        user:dev.site.com mikesilvis$ git clone { my ssh directory }
        server@ipaddress's password:
        remote: Counting objects: 3622, done.
        remote: Compressing objects: 100% (2718/2718), done.
        error: git upload-pack: git-pack-objects died with error.
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    Pushing files to the repository, however, still seems to work.
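
    The errors point at the remote side, so a hedged first step, assuming shell access to the server, is to let git verify the object store there:

        # On the server, inside the repository directory (path is hypothetical):
        cd /path/to/repo.git
        git fsck --full    # lists corrupt or missing objects, if any exist

    If fsck reports damage, a recent clone or a collaborator's copy is the usual source for restoring the missing objects.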

  • ODI 11g – Scripting Repository Creation

    - by David Allan
    Here’s a quick post on how to create both master and work repositories in one simple dialog; it uses the groovy capabilities in ODI 11g and the groovy swing builder components. So if you want more/less, take the groovy script and change it - it's easy stuff. The groovy script odi_create_repos.groovy is here; just open it in ODI before connecting and you will be able to create both master and work repositories with ease - or check the groovy out and script your own automation. You can construct the master, work and runtime repositories, so if you are embedding ODI as your DI engine this may be very useful. When you click 'Create Repository' you will see the following in the log as the master repository starts to be created:

        ======================================================
        Repository Creation Started....
        ======================================================
        Master Repository Creation Started....

    Then the completion message, followed by the work repository creation and the final completion message:

        Master Repository Creation Completed.
        Work Repository Creation Started.
        Work Repository Creation Completed.
        ======================================================
        Repository Creation Completed Successfully
        ======================================================
        Script exited.

    If any error is hit, the script just exits and prints the error to the log. For example, if I enter no passwords, I will get this error:

        ======================================================
        Repository Creation Started....
        ======================================================
        Master Repository Creation Started....
        ======================================================
        Repository Creation Complete in Error
        ======================================================
        oracle.odi.setup.RepositorySetupException: oracle.odi.core.security.PasswordPolicyNotMatchedException: ODI-10189: Password policy MinPasswordLength is not matched.
        ======================================================
        Script exited.

    This is another example of using the ODI 11g SDK, showing how to automate the construction of your data integration environment. The main interfaces and classes used here are IMasterRepositorySetup / MasterRepositorySetupImpl and IWorkRepositorySetup / WorkRepositorySetupImpl.

  • Any way to skip the huge apt cache update every time a repository is added?

    - by Nirmik
    I keep adding at least one repository per day - I like new stuff and like to personalise my Ubuntu. But now, every time I add a repository, the apt cache needs to be updated, and it's almost a 10-15 minute update every day. Moreover, it consumes a lot of data. Is there any way or workaround so that the new repository is updated in the apt cache without updating everything else? Also, sometimes I do not wish to fetch some updates at a given point of time due to my bandwidth limitations, but if I have to add a repository, all the package lists are refreshed by the sudo apt-get update command. I would appreciate any help; the bandwidth limit is actually a major issue for me. Thanks :)
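
    APT can be told to read just one source list for a single update run; a hedged sketch (the list file name is hypothetical - use whatever file the new PPA landed in under /etc/apt/sources.list.d/):

        sudo apt-get update \
            -o Dir::Etc::sourcelist="sources.list.d/my-new-ppa.list" \
            -o Dir::Etc::sourceparts="-" \
            -o APT::Get::List-Cleanup="0"

    The first option points the run at only the new list, the second disables the sources.list.d directory for this run, and List-Cleanup=0 keeps the already-downloaded indexes for every other repository intact.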

  • Problem resolving a generic Repository with Entity Framework and Castle Windsor Container

    - by user368776
    Hi, I'm working on a generic repository implementation with EF v4; the repository must be resolved by Windsor Container. First, the interface:

        public interface IRepository<T>
        {
            void Add(T entity);
            void Delete(T entity);
            T Find(int key);
        }

    Then a concrete class implements the interface:

        public class Repository<T> : IRepository<T> where T : class
        {
            private IObjectSet<T> _objectSet;
        }

    I need _objectSet to do stuff like this in the previous class:

        public void Add(T entity)
        {
            _objectSet.AddObject(entity);
        }

    And now the problem: as you can see, I'm using an EF interface, IObjectSet, to do the work, but this type requires a constraint on the generic type T ("where T : class"). That constraint is causing an exception when Windsor tries to resolve the concrete type. The Windsor configuration looks like this:

        <castle>
          <components>
            <component id="LVRepository"
                       service="Repository.Infraestructure.IRepository`1, Repository"
                       type="Repository.Infraestructure.Repository`1, Repository"
                       lifestyle="transient">
            </component>
          </components>
        </castle>

    The container resolve code:

        IRepository<Product> productsRep = _container.Resolve<IRepository<Product>>();

    And the exception I'm getting:

        System.ArgumentException: GenericArguments[0], 'T', on 'Repository.Infraestructure.Repository`1[T]' violates the constraint of type 'T'. ---> System.TypeLoadException: GenericArguments[0], 'T', on 'Repository.Infraestructure.Repository`1[T]' violates the constraint of type parameter 'T'.

    If I remove the constraint in the concrete class and the dependency on IObjectSet (if I don't, I get a compile error), everything works fine, so I don't think it is a container issue - but IObjectSet is a MUST in the implementation. Some help with this, please.
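
    A commonly suggested change for this kind of open-generic mismatch (an assumption here, not verified against this exact setup) is to declare the same constraint on the interface, so every T the container can resolve through IRepository<T> is also valid for Repository<T>:

        // Sketch: mirroring the class constraint on the interface keeps the
        // open-generic mapping IRepository<T> -> Repository<T> consistent.
        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Delete(T entity);
            T Find(int key);
        }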

  • Multi-tenant Access Control: Repository or Service layer?

    - by FreshCode
    In a multi-tenant ASP.NET MVC application based on Rob Conery's MVC Storefront, should I be filtering the tenant's data in the repository or in the service layer?

    1. Filter the tenant's data in the repository:

        public interface IJobRepository
        {
            IQueryable<Job> GetJobs(short tenantId);
        }

    2. Let the service filter the repository data by tenant:

        public interface IJobService
        {
            IList<Job> GetJobs(short tenantId);
        }

    My gut feeling says to do it in the service layer (option 2), but it could be argued that each tenant should in essence have its own "virtual repository" (option 1), where this responsibility lies with the repository. Which is the most elegant approach: option 1, option 2, or is there a better way?

    Update: I tried the proposed idea of filtering at the repository, but the problem is that my application provides the tenant context (via sub-domain) and only interacts with the service layer; passing the context all the way to the repository layer is a mission. So instead I have opted to filter my data at the service layer. I feel that the repository should represent all data physically available in the repository, with appropriate filters for retrieving tenant-specific data, to be used by the service layer.

    Final update: I ended up abandoning this approach due to the unnecessary complexities. See my answer below.
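
    A minimal sketch of option 2 as described in the update (the types and members are hypothetical, not taken from the question):

        using System.Collections.Generic;
        using System.Linq;

        public class Job
        {
            public int Id { get; set; }
            public short TenantId { get; set; }
        }

        public interface IJobRepository
        {
            // The repository exposes all physically available data...
            IQueryable<Job> GetJobs();
        }

        public class JobService
        {
            private readonly IJobRepository _repository;

            public JobService(IJobRepository repository)
            {
                _repository = repository;
            }

            // ...and the service applies the tenant filter, since the
            // application only talks to the service layer anyway.
            public IList<Job> GetJobs(short tenantId)
            {
                return _repository.GetJobs()
                                  .Where(j => j.TenantId == tenantId)
                                  .ToList();
            }
        }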

  • Maven doesn't see my <repository> in <distributionManagement>

    - by Ondra Žižka
    To make Maven "deploy" to a directory, I use this:

        <distributionManagement>
          <downloadUrl>http://code.google.com/p/junitdiff/downloads/list</downloadUrl>
          <repository>
            <id>local-hack-repo</id>
            <name>LocalDir</name>
            <url>file://${project.basedir}/dist-maven</url>
          </repository>
          <snapshotRepository>
            <id>jboss-snapshots-repository</id>
            <name>JBoss Snapshots Repository</name>
            <!-- <url>https://repository.jboss.org/nexus/content/repositories/snapshots</url> -->
            <url>file://${project.basedir}/dist-maven</url>
          </snapshotRepository>
        </distributionManagement>

    This appears in the effective pom:

        ...
        <distributionManagement>
          <repository>
            <id>local-hack-repo</id>
            <name>LocalDir</name>
            <url>file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven</url>
          </repository>
          <snapshotRepository>
            <id>jboss-snapshots-repository</id>
            <name>JBoss Snapshots Repository</name>
            <url>file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven</url>
          </snapshotRepository>
          <downloadUrl>http://code.google.com/p/junitdiff/downloads/list</downloadUrl>
        </distributionManagement>

    But still, Maven insists that it's not there:

        [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project JUnitDiff: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter -> [Help 1]
        [INFO] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project JUnitDiff: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter
        [INFO] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
        [INFO] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
        [INFO] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
        [INFO] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
        [INFO] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
        [INFO] at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
        [INFO] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
        [INFO] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
        [INFO] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
        [INFO] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
        [INFO] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
        [INFO] at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
        [INFO] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        [INFO] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        [INFO] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        [INFO] at java.lang.reflect.Method.invoke(Method.java:601)
        [INFO] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
        [INFO] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
        [INFO] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
        [INFO] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
        [INFO] Caused by: org.apache.maven.plugin.MojoExecutionException: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter
        [INFO] at org.apache.maven.plugin.deploy.DeployMojo.getDeploymentRepository(DeployMojo.java:235)
        [INFO] at org.apache.maven.plugin.deploy.DeployMojo.execute(DeployMojo.java:118)
        [INFO] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
        [INFO] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
        [INFO] ... 19 more

    I am using it through the maven-release-plugin. What's wrong?
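
    The error message itself names an escape hatch; a hedged sketch using the id and path from the question (the "default" layout is an assumption):

        mvn deploy -DaltDeploymentRepository=local-hack-repo::default::file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven

    With the release plugin this would need to be passed through to the forked deploy, e.g. via the plugin's arguments configuration; that detail is an assumption, not something the log above confirms.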

    Read the article

  • Assign highest priority to my local repository

    - by Anwar Shah
    Original question was : "How to assign highest priority to local repository without using sources.list file" I have setup a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet, when I need to reinstall my Ubuntu. It is a basic repository, created with apt-ftparchive packages . > Packages. I made this a trusted repository to avoid "unauthenticated repository" warning. (When you have a untrusted repository, apt or synaptic try to download the same packages over the Internet, 'cause it is trusted). I have been using this local repository for at least 1 years. But I have to always put my local repository line at the top of the sources.list file to use this. But this is annoying, since I must open a terminal and do some typing on it every time I reinstall Ubuntu, though there is a better tool software-properties-gtk. I cannot use this tool since it place the source line at the end of `sources.list. And the real problem is that, the apt or synaptic always download a package from the source which is mentioned earlier, without inspecting whether the packages are already available in the local repository. So, I have no choice but to place the local source at the top of sources.list doing terminal (I actually don't hate terminal, but I need a solution) . I have tried this method. But this does not help me. My preference file is this in /etc/apt/preferences.d/local-pin-900 Package: * Pin: release o=Local,n=ubuntu-local Pin-Priority: 900 My release file is this Origin: Local Label: Local-Ubuntu Description: Local Ubuntu Repository Codename: ubuntu-local MD5Sum: ed43222856d18f389c637ac3d7dd6f85 1043412 Packages d41d8cd98f00b204e9800998ecf8427e 0 Sources When I enable the apt-preference, the apt-cache policy correctly shows the preference, e.g. It shows the local repository has the highest priority. But when I do this sudo apt-get install <package-name>, apt tries to download it from Internet. But when I place my local-repo at the top, it installs from local repository. So, My question is - 'Is it possible to force apt to use local repository when the package is available in local repository, without explicitly placing "the local source" at the top of my repository list (e.g sources.list file) ?' Edit: output of apt-cache policy $package_name is as follows nautilus-wipe: Installed: (none) Candidate: 0.1.1-2 Version table: 0.1.1-2 0 500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages 900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages It is showing that my local repository has higher preference, though it is not the one which comes first in sources.list file. Here is the output of apt-get install nautilus-wipe Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: nautilus-wipe 0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded. Need to get 30.7 kB of archives. After this operation, 150 kB of additional disk space will be used. 'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994 It is still trying to get the file from Internet, though I think it should be happy with the local one.

  • Ubuntu web server cluster checks Ubuntu repository for script updates with cron

    - by StuartTheY
    I have a cluster of Ubuntu 12.04 web servers running a LAMP stack, all connected to a load balancer on Amazon Web Services. What I want to be able to do is have a dedicated Ubuntu server on which I can update the PHP files, and have the other web servers check with cron to get the updated files from that repository. They don't have to use cron, but that was the only thing I could think of, unless there is a way to have the repository tell them that it has updated files - and then how to transfer those files. Also, is there a way for a server to check for updated files when it boots? I am going to be using auto scaling on AWS, so when there is an increase in load and another server gets created, I need it to download the updated files from the repository when launched.
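
    A hedged sketch of the pull side, assuming the dedicated server exposes the site as a git repository (the host, path, branch, and user are hypothetical):

        # /etc/cron.d/site-pull -- poll the dedicated server every 5 minutes
        */5 * * * * www-data cd /var/www/site && git pull --ff-only origin master

        # @reboot covers the auto-scaling case: a freshly launched instance
        # syncs once at boot before it starts taking traffic.
        @reboot     www-data cd /var/www/site && git pull --ff-only origin master

    rsync over ssh would fit the same cron shape if the code is not under version control.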

  • How do I remove a repository from yum

    - by sunil
    When I search for a package with yum (CentOS 6), it tries to search in a repo named 'c6-media' and gives a bunch of errors as follows:

        file:///media/CentOS/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/CentOS/repodata/repomd.xml
        Trying other mirror.
        file:///media/cdrecorder/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrecorder/repodata/repomd.xml
        Trying other mirror.
        file:///media/cdrom/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrom/repodata/repomd.xml
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: c6-media. Please verify its path and try again

    Obviously, the error seems to say that yum is trying to read the CD/DVD which installed the OS, which I no longer have. All I want to do now is delete this repository from yum. I went to the package manager graphical tool and removed it from the sources, but it seems yum and the graphical tool do not use the same config. That is just my guess.
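
    On a stock CentOS 6 install the c6-media repository is defined in /etc/yum.repos.d/CentOS-Media.repo (worth verifying on your system); a sketch of two common ways to silence it:

        # One-off: skip the repo for a single command
        yum --disablerepo=c6-media search some-package

        # Permanent: mark it disabled in its repo file
        sudo sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/CentOS-Media.repo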

  • SmartSVN - Unable to create new repository profile

    - by Sandeepan Nath
    I have just installed SmartSVN on this Fedora system. The application starts (on running ./smartsvn.sh) with its usual UI, but many things are not working.

    Creating a new repository profile: trying to create one (Repositories > Repository Profiles > Add) gives:

        An Error occurred while processing an SVN command - Cannot connect to 'svn+ssh://192.168.0.103': There was a problem while connecting to 192.168.0.103:22

    Quick Checkout: trying to do a Quick Checkout (less configuration) gives:

        An Error occurred while processing an SVN command - Malformed XML.

    Some observations: when I run the smartsvn.sh file like this:

        ./smartsvn.sh

    it shows this in the console:

        Warning: /bin/java does not exist
        Could not lock /root/.smartsvn/_lock_
        Switched to running instance

    I was using SmartSVN on another system before this, where it was working. There too, it showed the warning "Warning: /bin/java does not exist", but the "Could not lock" and "Switched to running instance" lines did not appear. I have only the JRE installed on both systems, not the JDK. So, what could be the reason? Any pointers? Thanks, Sandeepan

  • Setting up a git repository on a server

    - by lostInTransit
    Hi, I read through the other git questions here, but couldn't really tell whether they are trying to do the same thing as I am - so if you find any duplicates, please let me know. I have a central server with SSO installed, and all my machines are connected to this server through the LAN. I have also set up a remote git repository on this server. Now what I'd like to do is make the server act as a central repository: all my employees can commit their code to the server, and the server pushes it to the remote git repository. Also, can I integrate it with SSO in any way? Can someone please help me out with this process? I am new to git and still learning how to use it effectively, so a step-by-step process, or an existing document which I can refer to for this, would be great. Thanks.
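
    A hedged sketch of the usual shape for this (the paths and remote name are hypothetical): a bare repository everyone pushes to, plus a post-receive hook that forwards each push to the second repository.

        # On the central server: create the shared repository
        git init --bare /srv/git/project.git

        # Register the downstream repository once, inside the bare repo
        cd /srv/git/project.git
        git remote add upstream-mirror ssh://remote-host/path/to/remote.git

        # hooks/post-receive (make it executable): runs after every push
        #!/bin/sh
        git push --mirror upstream-mirror

    Developers then clone and push over SSH (git clone ssh://central-server/srv/git/project.git). Tying authentication into SSO usually comes down to how SSH logins on the central server are managed, which depends on the SSO product in use.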

  • Adding Thunderbird-stable repository gives "can't find signing_key_fingerprint" error

    - by EBV2010
    I'm trying to install Thunderbird 11 on Kubuntu 10.04. I was able to do it on the machine I'm working on. To get a clean process that I can roll out to other clients, I re-installed the machine and repeated the process. This is what I did (I've left out the sudo for clarity):

        add-apt-repository ppa:ubuntu-mozilla-security/ppa
        apt-get update
        add-apt-repository ppa:mozilla-team/thunderbird-stable

    The last one resulted in this error:

        Error: can't find signing_key_fingerprint at https://launchpad.net/api/1.0/~mozilla-team/+archive/thunderbird-stable

    The machine as it was before re-installation gave no such message; it was built from the same sources. Bottom line: I got Thunderbird 11.0 to run on Kubuntu 10.04, but after re-installation, adding the repository gives an error and the PPA won't add. Is there a way to solve the signing_key_fingerprint error?
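
    If add-apt-repository itself is the part that is failing, one hedged workaround is to add the PPA by hand; a sketch for 10.04 (lucid), with the signing key ID left as a placeholder to look up on the PPA's Launchpad page:

        echo "deb http://ppa.launchpad.net/mozilla-team/thunderbird-stable/ubuntu lucid main" | \
            sudo tee /etc/apt/sources.list.d/thunderbird-stable.list
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_ID_FROM_LAUNCHPAD>
        sudo apt-get update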
