Search Results

Search found 6882 results on 276 pages for 'git repository'.

Page 53/276 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • How to remove corrupted repositories?

    - by istimsak abdulbasir
    I was in the process of updating 11.04 and came across an error message saying:

    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_natty-security_main_i18n_Translation-en
    E: The package lists or status file could not be parsed or opened.

    I tried to remove the damaged repository by going to the Ubuntu Software Center, but there was no option to remove a repository. Then I tried Synaptic, but I got the same error message stated above. I also cannot find Software Sources in 11.04. How do I remove a repository from the command line, since that seems to be my only option?
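    A minimal sketch of the usual command-line fix, assuming the problem is the corrupted package list rather than the repository definition itself (the repository URL below is a placeholder, not taken from the question):

        # remove the corrupted package lists and let apt rebuild them
        sudo rm -vf /var/lib/apt/lists/*
        sudo apt-get update

        # to drop a repository entirely, comment out its line in /etc/apt/sources.list
        # (or delete the matching file under /etc/apt/sources.list.d/), then update again
        sudo sed -i 's|^deb http://example.com/repo|# &|' /etc/apt/sources.list
        sudo apt-get update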

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a node application running on Windows which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which runs as Local System), I get errors when I try to install a private module from a private git repository:

    npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
    npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
    npm ERR!
    npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
    npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
    npm ERR!     at maybeClose (child_process.js:735:16)
    npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
    npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
    npm ERR!     at Pipe.close (net.js:451:12)
    npm ERR! If you need help, you may report this log at:
    npm ERR!     <http://github.com/isaacs/npm/issues>
    npm ERR! or email it to:
    npm ERR!     <[email protected]>
    npm ERR! System Windows_NT 6.1.7601
    npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
    npm ERR! cwd D:\nodeapp
    npm ERR! node -v v0.10.8
    npm ERR! npm -v 1.2.23
    npm ERR! code 128

    Just running git clone under the same system account works fine. Any ideas?
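    A hedged sketch of one common workaround, on the assumption that the failure is simply that the Local System profile has no npm cache directory at the path git tries to return to (the cache location below is a placeholder, not taken from the question):

        rem run once under the automation account, or bake into its environment
        mkdir "C:\npm-cache"
        npm config set cache "C:\npm-cache" --global

        rem alternatively, export the cache location before invoking npm install -d
        set npm_config_cache=C:\npm-cache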

    Read the article

  • Is it possible to install all packages from an APT repository?

    - by Kristoffer Hagen
    Is it possible to install all packages from an APT repository? I know it is possible to do manually, but then you would need to know all the package names, and I don't. Any suggestions? Thanks. Update: Well, you guys are going to kill me for this, but the reason for my madness is that I want to install all the packages from BackTrack into my Ubuntu installation. I really don't like the idea of having it in a VM, and having a separate partition for it is even more out of the question. I know that the folks at BackTrack don't like it when people leech their repositories, but that's what you get for releasing open source software. Stupid? Maybe. A valid reason? Probably not. Do I still want it? Yes. Another edit: I have now given up on this, as it seems impossible to get it to work even by manually installing packages.
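    For what it's worth, a hedged sketch of the brute-force approach the question hints at: list every package name published in one repository's Packages index and feed the list to apt-get. The index filename is a placeholder; the real one under /var/lib/apt/lists/ depends on the repository URL and architecture:

        awk '/^Package:/ {print $2}' /var/lib/apt/lists/example.repo_dists_revolution_main_binary-i386_Packages \
          | sort -u | xargs sudo apt-get install -y --ignore-missing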

    Read the article

  • How can I use apt-get to resolve package dependencies when there are multiple versions in the repository?

    - by user1165144
    I've packaged a-package.deb, which depends on b-package.deb in version 1.0. Everything works fine. But now a b-package in version 1.1 gets added to the repository. I'd expect apt-get to install the a-package and version 1.0 of the b-package. What really happens is that a-package won't get installed:

    # apt-get install a-package
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     a-package : Depends: b-package (= 1.0) but 1.1 is to be installed
    E: Unable to correct problems, you have held broken packages.

    Is there a workaround to fix this behaviour? Is there other software to use that can handle the dependencies as defined?
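    A hedged workaround sketch, assuming the strict versioned dependency is intentional: ask apt-get for the dependency at the required version explicitly (apt-get accepts package=version on the command line), or pin it so the resolver keeps choosing 1.0:

        # request the exact dependency version alongside the package
        sudo apt-get install a-package b-package=1.0

        # or pin b-package to 1.0 via apt preferences so future resolutions stick to it
        printf 'Package: b-package\nPin: version 1.0\nPin-Priority: 1001\n' \
          | sudo tee /etc/apt/preferences.d/b-package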

    Read the article

  • How to get git-completion.bash to work on Mac OS X?

    - by n179911
    Hi, I have followed http://blog.bitfluent.com/page/3 to add git-completion.bash as /opt/local/etc/bash_completion.d/git-completion, and I put PS1='\h:\W$(__git_ps1 "(%s)") \u\$ ' in my .bashrc_profile. But now I am getting -bash: __git_ps1: command not found every time I do a cd. Can you please tell me what I am missing?
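    A hedged sketch of the usual missing step, assuming the completion script simply is not being sourced before the prompt tries to call __git_ps1 (the path is the one from the question):

        # in ~/.bash_profile (or ~/.bashrc): source the script before setting PS1
        source /opt/local/etc/bash_completion.d/git-completion
        PS1='\h:\W$(__git_ps1 " (%s)") \u\$ '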

    Read the article

  • Git Shell in Windows: patch's default character encoding is UCS-2 Little Endian - how to change this to ANSI or UTF-8 without BOM?

    - by Sk8erPeter
    When creating a diff patch with Git Shell in Windows (when using GitHub for Windows), the character encoding of the patch is UCS-2 Little Endian, according to Notepad++ (see the screenshots below). How can I change this behavior and force git to create patches with ANSI or UTF-8 without BOM character encoding? It causes a problem because UCS-2 Little Endian encoded patches cannot be applied; I have to manually convert them to ANSI.
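    A hedged sketch, based on the assumption that the UCS-2 output comes from PowerShell's > redirection (the GitHub for Windows "Git Shell" is PowerShell-based) rather than from git itself:

        # from Git Bash or cmd.exe, redirection does not re-encode the output
        git diff > fix.patch

        # or, staying in PowerShell, write the file explicitly as ASCII/UTF-8
        git diff | Out-File -Encoding ascii fix.patch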

    Read the article

  • What is the best way to do development with git? [closed]

    - by marlene
    I have been searching the web for best practices, but don't see anything that is consistent. If you have an excellent development process that includes successful releases of your product as well as hotfixes/patches and maintenance releases, and you use git, I would love to hear how you use git to accomplish this. Do you use branches, tags, etc.? How do you use them? I am looking for details, please.

    Read the article

  • I added __git_ps1 to my PS1 in .bash_profile, now I'm getting (master) for all folders that aren't git repos.

    - by Matthew
    I'm on a Mac (10.6.5). Here's an example of what's going wrong:

    [m@m ~ (master)]$ cd ~/Documents
    [m@m ~/Documents (master)]$ cd ~/Applications
    [m@m ~/Applications (master)]$ cd ~/Library
    [m@m ~/Library (master)]$ cd ~/Sites/somesite
    [m@m ~/Sites/somerepo (FEATURE_SOMEFEATURE)]$

    Here's the relevant contents of my .bash_profile:

    source ~/.git-completion.bash
    PS1='[\u@\h \w$(__git_ps1 " (%s)")]\$ '

    I'm using the standard git-completion script - I just copied it to my home directory.
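    A hedged diagnostic sketch, assuming the usual cause: the home directory itself (or some parent of these folders) has accidentally been turned into a git repository, so every directory beneath it reports master:

        # if this prints a path (e.g. /Users/m) instead of an error, that directory
        # is the stray repository; remove its .git directory to stop the prompt reporting it
        cd ~ && git rev-parse --show-toplevel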

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (as it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions:

    Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better: all developers in a single repository, or a fork for everyone?

    Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even only for less-privileged devs), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it have different behavior?

    EDIT: I did some testing with issues, forks, and pull requests and found the following. If you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, do some changes, and merge to your fork's master branch, then when you run hub -i <issue #> you get an error, User is not authorized to modify the issue. So, apparently that workflow won't work.
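    For reference, a hedged sketch of the hub invocation being discussed (flags as documented for defunkt's hub of that era; the account, repository and branch names are placeholders):

        # attach the current branch to existing issue #123 as a pull request,
        # targeting the organization's repository rather than the developer's fork
        hub pull-request -i 123 -b myorg:master -h mydeveloper:feature-branch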

    Read the article

  • Using ISO Image with a Local Repository for updating Exadata Compute nodes

    - by Rene Kundersma
    For systems that cannot connect directly to Oracle ULN to build a local repository, an ISO image file is made available by Oracle. This ISO image can be mounted and used as a local repository. The ISO image contains a file system that holds only the latest (x86_64) ULN channel and cannot be used to update the database servers to any release other than 11.2.3.1.1. The ISO and instructions can be found here. Rene Kundersma
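    A hedged sketch of what "mounted and used as a local repository" typically looks like on a compute node; the mount point, ISO path and repo id below are placeholders, not taken from the Oracle instructions:

        mount -o loop /u01/stage/exadata_channel.iso /mnt/iso
        # describe the mounted file system to yum as a local repository
        printf '[exadata_local]\nname=Local Exadata ISO repository\nbaseurl=file:///mnt/iso\ngpgcheck=0\nenabled=1\n' \
            > /etc/yum.repos.d/exadata_local.repo
        yum clean all && yum repolist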

    Read the article

  • Zsh super slow inside my Git repo

    - by Jason Swett
    My Zsh is super slow inside a certain Git repo of mine. When I Google "zsh git slow", I get a bunch of results about Git autocompletion being slow, but autocompletion isn't necessarily my problem; it's everything. I tried removing all plugins and that, strangely, didn't do anything at all when I opened a new shell. Zsh would still do Git stuff inside my Git repo. I found this snippet on this page:

    function git_prompt_info() {
      ref=$(git symbolic-ref HEAD 2> /dev/null) || return
      echo "$ZSH_THEME_GIT_PROMPT_PREFIX${ref#refs/heads/}$ZSH_THEME_GIT_PROMPT_SUFFIX"
    }

    That made everything fast again, but it also gave me a prompt that looks like this:

    ? snip git:(master

    Note the missing right parenthesis. That's kind of lame. Plus the whole thing just seems like a hack I shouldn't have to do. There's also this promising-looking SU question, but the links in the accepted answer are dead. How can I get my Zsh not to be slow inside a Git repo?
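    A hedged sketch of another angle, assuming the cost is the dirty/untracked check the theme runs on every prompt (status.showUntrackedFiles is a standard git config key; whether it helps this particular repository is an assumption):

        # inside the slow repository: stop `git status` from scanning untracked files,
        # which is usually what makes prompt helpers crawl in large working trees
        git config --local status.showUntrackedFiles no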

    Read the article

  • Specify private SSH-key to use when executing shell command with or without Ruby?

    - by Christoffer
    A rather unusual situation perhaps, but I want to specify a private SSH key to use when executing a shell (git) command from the local computer. Basically like this:

    git clone [email protected]:TheUser/TheProject.git -key "/home/christoffer/ssh_keys/theuser"

    Or even better (in Ruby):

    with_key("/home/christoffer/ssh_keys/theuser") do
      sh("git clone [email protected]:TheUser/TheProject.git")
    end

    I have seen examples of connecting to a remote server with Net::SSH that uses a specified private key, but this is a local command. Is it possible? Thanks
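    A hedged sketch of the usual way to do this from the shell (and hence from Ruby's sh): point git at ssh through the GIT_SSH environment variable via a tiny wrapper script; newer git versions also accept GIT_SSH_COMMAND directly. The key path is taken from the question; the wrapper location is a placeholder:

        printf '#!/bin/sh\nexec ssh -i /home/christoffer/ssh_keys/theuser "$@"\n' > /tmp/ssh_with_key
        chmod +x /tmp/ssh_with_key
        GIT_SSH=/tmp/ssh_with_key git clone [email protected]:TheUser/TheProject.git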

    Read the article

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?

    Read the article

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance? I've been one of the maintainers of the perlfaq (Github), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository. I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files in the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate. Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files in the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the repos, but that's something I could change, I think. What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice versa, so the history and commit ids correspond in each.

    Creating patches works, but it's also a lot of work to manage it.
    Git submodules seem to only work to pull in the entire external repo.
    I haven't found something like svn's file externals, but that would work in both directions anyway.
    I'd love to just fetch objects from one and cherry-pick them in the other.

    What's a good way to manage this?
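    A hedged sketch of the fetch-and-cherry-pick idea from the last point, assuming both repos are local checkouts; note that cherry-picking preserves author, date and message but necessarily creates new commit ids in the receiving repo (paths and the commit range are placeholders):

        cd ~/src/perl
        git remote add perlfaq ~/src/perlfaq
        git fetch perlfaq
        # replay the recent perlfaq commits; pick only the ones touching the shared .pod files
        git cherry-pick perlfaq/master~5..perlfaq/master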

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

    The GUI tools aren't mature.
    Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

    Read the article

  • MVVM: where is the best place to integrate an id counter, in ViewModel or Repository or... ?

    - by msfanboy
    Hello, I am using http://loungerepo.codeplex.com/ - this library needs a unique id when I persist my entities to the repository. I decided on integer, not Guid. The question now is: where do I retrieve a new integer, and how do I do it? This is my current approach in SchoolclassAdministrationViewModel.cs:

    public SchoolclassAdministrationViewModel()
    {
        _schoolclassRepo = new SchoolclassRepository();
        _pupilRepo = new PupilRepository();
        _subjectRepo = new SubjectRepository();
        _currentSchoolclass = new SchoolclassModel();
        _currentPupil = new PupilModel();
        _currentSubject = new SubjectModel();
        ...
    }

    private void AddSchoolclass()
    {
        // get the last free id for a schoolclass entity
        _currentSchoolclass.SchoolclassID = _schoolclassRepo.LastID;
        // add the new schoolclass entity to the repository
        _schoolclassRepo.Add(SchoolclassModel.SchoolclassModelToSchoolclass(_currentSchoolclass));
        // add the new schoolclass entity to the ObservableCollection bound to the View
        Schoolclasses.Add(_currentSchoolclass);
        // Create a new schoolclass entity and the bound UI controls content gets cleaned
        CurrentSchoolclass = new SchoolclassModel();
    }

    public class SchoolclassRepository : IRepository<Schoolclass>
    {
        private int _lastID;

        public SchoolclassRepository()
        {
            _lastID = FetchLastId();
        }

        public void Add(Schoolclass entity)
        {
            //repo.Store(entity);
        }

        private int FetchLastId()
        {
            return // repo.GetIDOfLastEntryAndDoInc++
        }

        public int LastID
        {
            get { return _lastID; }
        }
    }

    Explanation: every time the user switches to the SchoolclassAdministrationViewModel, which is datatemplated with a UserControl, the ViewModel constructor is called and the schoolclass repository is created. There FetchLastId() is called, so I am up to date with the last ID and can increment it to get the next free one... Do you have any better ideas? What I do not like about my current approach:

    - Having a private method in the repository class, because a repository is meant to fetch data only, not hold "entity logic" like the counter.
    - Having the ViewModel access a public property located in the repository - actually it is not the ViewModel's concern to get an entity id and assign it. Actually the ViewModel should ask for a schoolclass POCO and get a SchoolclassModel to bind to the UI. But then I have to re-read the Schoolclass properties into the SchoolclassModel properties, which is what I want to avoid.

    Read the article

  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project). My EC2 project:

    git push prod master
    Counting objects: 9, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (5/5), done.
    Writing objects: 100% (5/5), 456 bytes, done.
    Total 5 (delta 3), reused 0 (delta 0)
    remote:
    remote: Receiving push
    remote: Deploying updated files (by resetting HEAD)
    remote: HEAD is now at bf17da8 test commit
    remote: Running bundler to install gem dependencies
    remote: Fetching source index for http://rubygems.org/
    remote: Installing rake (0.8.7)
    remote: Installing abstract (1.0.0)
    ...
    remote: Installing railties (3.0.0)
    remote: Installing rails (3.0.0)
    remote: Your bundle is complete! It was installed into ./.bundle/gems
    remote: Launching (by restarting Passenger)... done
    remote: To ssh://[email protected]/~/apps/app_name
    e8bd06f..bf17da8 master -> master

    Heroku:

    $> git push heroku master
    Counting objects: 179, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (89/89), done.
    Writing objects: 100% (105/105), 42.70 KiB, done.
    Total 105 (delta 53), reused 0 (delta 0)
    -----> Heroku receiving push
    -----> Rails app detected
    -----> Gemfile detected, running Bundler version 1.0.3
    Unresolved dependencies detected; Installing...
    Using --without development:test
    Fetching source index for http://rubygems.org/
    Installing rake (0.8.7)
    Installing abstract (1.0.0)
    ...
    Installing railties (3.0.0)
    Installing rails (3.0.0)
    Your bundle is complete! It was installed into ./.bundle/gems
    Compiled slug size is 4.8MB
    -----> Launching... done
    http://your_app_name.heroku.com deployed to Heroku
    To [email protected]:your_app_name.git
    3bf6e8d..642f01a master -> master
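    One hedged explanation sometimes offered for this (an assumption, not verified against Heroku's actual implementation): the pushing client always prepends "remote: " to whatever the hook writes, so deployment scripts emit an ANSI "move to column 1" escape at the start of each line, visually overwriting the prefix on the pusher's terminal:

        # in the post-receive hook: \033[1G moves the cursor to column 1 before the
        # arrow is printed, hiding the "remote: " prefix on ANSI-capable terminals
        echo -e "\033[1G-----> Receiving push"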

    Read the article

  • error: a NUL byte in commit log message not allowed [migrated]

    - by James
    I'm trying to commit some files in my Git repository, and I'm receiving this error. This all started when I ran git rm -rf folder and git rm -rf file and tried to commit the changes. I've since been able to commit and push without these files being deleted from my remote repository; however, I'm now completely stuck. The full error is:

    error: a NUL byte in commit log message not allowed.
    fatal: failed to write commit object

    What can I do to fix this? My Google-fu has let me down on this one. Edit: I've just checked out these deleted files and attempted to commit again, but it's still giving me the same error. Has my Git repo been corrupted or something?
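    A hedged diagnostic sketch, on the assumption that the NUL bytes come from the prepared commit message (for example a .git/COMMIT_EDITMSG or commit template that an editor saved as UTF-16) rather than from the repository objects themselves:

        file .git/COMMIT_EDITMSG        # "UTF-16" in the output would explain the NUL bytes
        # bypass the editor entirely and supply the message on the command line
        git commit -m "Remove obsolete folder and file"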

    Read the article

  • Problem Installing older TestNG plugin on Eclipse 3.5

    - by Stefan
    I'm trying to install TestNG 5.11 on Eclipse 3.5 and getting the following:

    eclipse.buildId=unknown
    java.version=1.6.0_19
    java.vendor=Sun Microsystems Inc.
    BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=no_NO
    Framework arguments: -product org.eclipse.epp.package.jee.product
    Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.jee.product

    Error Mon Jun 07 15:45:53 CEST 2010 Artifact not found: org.eclipse.update.feature,org.testng.eclipse,5.11.0.28.
    java.io.FileNotFoundException: "http://beust.com/eclipse/features/org.testng.eclipse_5.11.0.28.jar" at org.eclipse.equinox.internal.p2.repository.RepositoryStatusHelper.checkFileNotFound(RepositoryStatusHelper.java:289) at org.eclipse.equinox.internal.p2.repository.FileReader.checkException(FileReader.java:352) at org.eclipse.equinox.internal.p2.repository.FileReader.sendRetrieveRequest(FileReader.java:326) at org.eclipse.equinox.internal.p2.repository.FileReader.readInto(FileReader.java:263) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:71) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:127) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:468) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:451) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:518) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.getArtifact(MirrorRequest.java:200) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transferSingle(MirrorRequest.java:175) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transfer(MirrorRequest.java:159) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.perform(MirrorRequest.java:95) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:507) at org.eclipse.equinox.internal.p2.artifact.repository.simple.DownloadJob.run(DownloadJob.java:64) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Error Mon Jun 07 15:45:53 CEST 2010 Artifact not found: osgi.bundle,org.testng.eclipse,5.11.0.28.
java.io.FileNotFoundException: "http://beust.com/eclipse/plugins/org.testng.eclipse_5.11.0.28.jar" at org.eclipse.equinox.internal.p2.repository.RepositoryStatusHelper.checkFileNotFound(RepositoryStatusHelper.java:289) at org.eclipse.equinox.internal.p2.repository.FileReader.checkException(FileReader.java:352) at org.eclipse.equinox.internal.p2.repository.FileReader.sendRetrieveRequest(FileReader.java:326) at org.eclipse.equinox.internal.p2.repository.FileReader.readInto(FileReader.java:263) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:71) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:127) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:468) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:451) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:518) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.getArtifact(MirrorRequest.java:200) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transferSingle(MirrorRequest.java:175) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transfer(MirrorRequest.java:159) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.perform(MirrorRequest.java:95) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:507) at org.eclipse.equinox.internal.p2.artifact.repository.simple.DownloadJob.run(DownloadJob.java:64) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Error Mon Jun 07 15:45:53 CEST 2010 session context was:(profile=epp.package.jee, phase=org.eclipse.equinox.internal.provisional.p2.engine.phases.Collect, operand=, action=). I'm kinda stuck so I would really like help. Thanks

    Read the article

  • How does one handle sensitive data when using Github and Heroku?

    - by Jonas
    I am not yet accustomed to the way Git works (and I wonder if anyone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you are most likely going to share this repo on GitHub or another Git host. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc... But these things still need to be part of the Git repo, since you use it to push your code to Heroku. How do you work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. My question is more about how to "hide" parts of the code.
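    A hedged sketch of the common pattern, assuming the secrets can live outside the tracked files: commit a template, ignore the real file, and feed the live values to Heroku as config vars. File names and values are placeholders; the Heroku CLI subcommand was config:add at the time (config:set in newer toolbelts):

        echo "config/credentials.yml" >> .gitignore
        git add .gitignore config/credentials.yml.example   # template with dummy values
        heroku config:add DATABASE_PASSWORD=real-secret S3_KEY=real-key   # real values never enter git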

    Read the article

  • Agile version control?

    - by Paul Dixon
    I'm trying to work out a good method to manage code changes on a large project with multiple teams. We use Subversion at the moment, but I want more flexibility in building a new release than I seem to be able to get with Subversion. Here's roughly what I want:

    for each developer to create easily identifiable patches for the project. Each patch delivers a complete user story (a releasable feature or fix). It might encompass many changes to many files.
    developers are able to easily apply and remove their own and other people's patches to facilitate testing
    release manager selects the patches to be used in the next release into a new branch
    branch is tested, fixes merged in, and ultimately merged into live
    teams can then pull these changes back down into their sandboxes.

    I'm looking at Stacked Git as a way of achieving this, but what other tools or techniques can deliver this sort of workflow?
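    A hedged git-only sketch of the release-manager step described above, assuming each user story lives on its own branch (branch names are placeholders):

        # build the candidate release from only the selected stories
        git checkout -b release-2.4 master
        git merge --no-ff story/login-rework
        git merge --no-ff story/invoice-export
        # after testing and fixes, fold the release into the live branch
        git checkout live && git merge --no-ff release-2.4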

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of ever-growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option that you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database allows you to leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge passes to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can it fit my business requirements? Well, yes. It's up to you how you manage it; you just need a good plan.

    During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that a document could receive a lot of revisions, so maybe you should consider the revision creation date instead. Your plan can also have complex rules, like per Document Type, per a custom metadata field like department, or a hybrid per date, per DocType and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, different disk types (such as SSDs, SAS, SATA, tape, …), separated according to your business requirements, separating the "hot" content from the legacy and easily matching your compliance requirements. If you decide to partition by revision, the simple way is to consider the dId, which is the sequential unique id for every content item created with WebCenter Content, or the dLastModified, which is the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles.

    Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you can use the dId, or a join with other tables for other metadata such as dDocType, Security, etc.). The test scenario below covers:

    Previously existing data on the JDBC Storage to be migrated to the new partitioned JDBC Storage
    Partition by date
    Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    Deduplication and compression for legacy data
    Oracle WebCenter Content 11g PS5 (the environment may contain some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointed to the JDBC Storage: enable the metadata StorageRule in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user. We will use the schema owner TESTS_OCS. I can't forget to mention that this is just a test, and you should do a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;

    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will be partitioned automatically using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the actual FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as an interim table:

    CREATE TABLE FILESTORAGE_Part (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS )
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions has been created yet. Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,col_mapping => NULL
        ,options_flag => DBMS_REDEFINITION.CONS_USE_PK );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS
        ,copy_triggers => TRUE
        ,copy_constraints => TRUE
        ,copy_privileges => TRUE
        ,ignore_errors => TRUE
        ,num_errors => redefinition_errors
        ,copy_statistics => FALSE
        ,copy_mvlog => FALSE );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show two lines related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed ranges, you should see the new partitions created automatically, without an error even if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use this command:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (originals and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file has been created and you can open it - if so, everything is working. The redefinition process can be repeated many times, which allows you to test which layout is better, over and over again. Now some interesting maintenance actions related to the partitions:

    Make a tablespace read-only. There are no issues viewing content, since WebCenter Content does not alter the revisions. When you try to delete content that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if a document lives in a "read only" repository, executes the deletion of the metadata and marks the document to be deleted during the next DB maintenance, such as a new redefinition.

    Take a tablespace offline for archiving purposes or any other reason.
    When you try to open a document that is included in this tablespace, you will receive an error saying it was unable to retrieve the content, but the other online tablespaces are not affected. The same behavior occurs when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be put online again.

    Moving some legacy content to an offline repository (table), using the Exchange option to move the content from one partition to an empty nonpartitioned table like FileStorage_LEGACY. Note that this option removes the rows from FileStorage, and you will no longer be able to open the stored content. You always need to keep in mind the indexes and constraints.

    A redefinition separating the original content (vault) from the renditions and separating by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files in storage with less performance. The process will be the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      )
      ,PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      )
      ,PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;

    The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. Also, we can include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you or if you have a use case not listed - leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • How Can I Point My Local Testing Server at My GitHub Repository?

    - by Goober
    Up until a few days ago, I had a particular setup that was as follows. Using SVN, all of the websites that I developed were committed to a source control drop box on a local testing server. Then, using IIS, a new website was set up to point at the last revision of each particular website I developed and display it to the outside world using a specific URL. I have just moved over to using Git and GitHub, meaning all of my source-controlled code is no longer stored on a local testing server. As a result, I am not sure how I can go about doing a similar thing to what I did with the SVN setup; however, I need to be able to essentially have that same setup again, just using Git. So basically, how can I go about getting my local testing server to point at the GitHub repository for that site? Help greatly appreciated.
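    A hedged sketch of one common way to recreate that setup, assuming the testing server can reach GitHub over SSH or HTTPS (the repository URL and the directory IIS serves are placeholders):

        # on the testing server: clone the GitHub repo into the folder the IIS site points at
        git clone [email protected]:youraccount/yoursite.git /c/inetpub/sites/yoursite
        # then refresh it on a schedule (or from a GitHub webhook) to track the latest revision
        cd /c/inetpub/sites/yoursite && git pull origin master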

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >