Search Results

Search found 9301 results on 373 pages for 'git filter branch'.

Page 52/373 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • Git - tidying up a repo

    - by Simon Woods
    Hi, I have got my repo into a bit of a state and want to be able to work my way out of it. The repo looks a bit like this (A1, B1, C1 etc. are obviously commits):

      A1 ---- A2 ---- A3 ---- A4 ---- A5 ---- A6 ---- A7 ---- A8
             /                              (from a remote repo)
      B1 ---- B2 ---------------------------------
       |  \                                       \
       C1 ---------------------------------C2
        \                                   /
         D1 --- D2 --- D3 --- D4 --- D5 --- D6

    Ideally I'd like to be able to remove all the revisions (with rebase?) on the B, C and D lines (I'm loath to say branches, simply because there are now no local branches on these lines except ref branches to the remote repo) and try to merge in the remote repo again, perhaps in a better way. I'd be grateful for any suggestions as to how to get rid of all these commits. Could I ask that any answers use revision SHA1s rather than branch names? I thought that somehow I'd be able to revert the merge into A7 but can't quite work out how to do it. I hope that is sufficient information. Many thanks, Simon
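
    One possible way out (not from the question, and assuming A7 is the merge commit to undo): rewind master to the last A-line commit worth keeping, re-apply what followed, and redo the merge. The SHA1s below are placeholders for the real ones from git log.

      git checkout master
      git branch backup-before-cleanup      # keep the old history reachable, just in case
      git reset --hard <sha1-of-A6>         # rewind master to before the unwanted merge
      git cherry-pick <sha1-of-A8>          # re-apply any good commits that followed it
      git merge <sha1-of-remote-tip>        # then redo the merge from the remote line as preferred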

    Read the article

  • How to safely backport specific linux kernel commits to an older kernel using git

    - by superc0w
    I'm currently on a stable 2.6.32 kernel, but I need certain fixes from the 2.6.33 branch to be incorporated into this 2.6.32 kernel so that I can create a custom kernel for testing purposes. I can't apply the fixes directly to the 2.6.32 source because they seem to have dependencies on other fixes. Is there any safe way to incorporate only the fixes (and all their dependencies) I need into the 2.6.32 kernel with git to create a custom kernel? Assuming there is a way to do the above, is there a way to track the fixes that have been applied to the custom kernel (i.e. track which commits have been applied to the 2.6.32 kernel to create the custom kernel source)?
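
    A minimal sketch of the usual approach, assuming a clone of the mainline kernel tree that carries the v2.6.32 tag; the SHA1s are placeholders for the fixes (and their dependencies) identified on the 2.6.33 branch:

      git checkout -b custom-2.6.32 v2.6.32            # custom branch rooted at the stable release
      git cherry-pick <fix-sha1> <dependency-sha1>     # apply each needed fix plus its dependencies
      git log --oneline v2.6.32..custom-2.6.32         # lists exactly which commits have been applied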

    Read the article

  • How to force rebase when same changes applied to both branches manually?

    - by Dmitry
    My repository looks like:

      X - Y - A - B - C - D - E          branch:master
           \   \   \   \   \             (merges: master -> release)
            \   \
             M --- BCDE --- N            branch:release

    Here "M - BCDE - N" are manually (unfortunately!) applied changes that are approximately the same as the separate commits "A - B - C - D - E" (but it seems git does not know that these changes are the same). I'd like to rebase and get the following structure:

      X - Y - A - B - C - D - E          branch:master
           \
            *                            branch:release

    I.e. I want to make branch:release exactly the same as branch:master and fork it from master's HEAD. But when I run "git rebase master" sitting on the release branch, git reports lots of conflicts and refuses to rebase. How could I solve this? Another way of explaining this: I'd like to "re-create" branch:release from scratch from master's HEAD. And there are a lot of other people who have already done "git pull" on branch:release, so I cannot use git reset + git push -f.
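
    One recipe that is sometimes used for this (an assumption on my part, not something from the question): record an ordinary merge commit on release whose tree is taken wholesale from master. History is not rewritten, so people who have already pulled release just receive one new merge commit.

      git checkout release
      git merge --no-commit -s ours master     # start a merge with master, keeping release's tree for now
      git read-tree -u --reset master          # then replace the index and working tree with master's content
      git commit -m "Make release identical to master"
      git push origin release                  # an ordinary fast-forward push; no git push -f needed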

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. At this stage of the project 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it there is no possibility of a merge conflict since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.04; resources available are 8 GB RAM, 500 GB of free disk space and 4 CPU cores at 3 GHz. I don't need someone else to solve the problem but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
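
    Since every import lands on master in order, a plain shell loop is enough and no branches are involved, so the branch_n concern goes away. A sketch, assuming the tarballs sit in one directory, sort correctly by name, and each unpacks into a single top-level directory (all assumptions on my part):

      git init project && cd project
      for tb in /path/to/tarballs/project-1.*.tar.gz; do
          find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +   # clear the previous snapshot, keep .git
          tar xzf "$tb" --strip-components=1                              # unpack this version in its place
          git add -A                                                      # stages additions, changes and deletions
          git commit -m "Import $(basename "$tb")"
      done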

    Read the article

  • Custom fail2ban Filter for phpMyadmin bruteforce attempts

    - by Michael Robinson
    In my quest to block excessive failed phpMyAdmin login attempts with fail2ban, I've created a script that logs said failed attempts to a file: /var/log/phpmyadmin_auth.log

    Custom log

    The format of the /var/log/phpmyadmin_auth.log file is:

      phpMyadmin login failed with username: root; ip: 192.168.1.50; url: http://somedomain.com/phpmyadmin/index.php
      phpMyadmin login failed with username: ; ip: 192.168.1.50; url: http://192.168.1.48/phpmyadmin/index.php

    Custom filter

      [Definition]
      # Count all bans in the logfile
      failregex = phpMyadmin login failed with username: .*; ip: <HOST>;

    phpMyAdmin jail

      [phpmyadmin]
      enabled = true
      port = http,https
      filter = phpmyadmin
      action = sendmail-whois[name=HTTP]
      logpath = /var/log/phpmyadmin_auth.log
      maxretry = 6

    The fail2ban log contains:

      2012-10-04 10:52:22,756 fail2ban.server : INFO Stopping all jails
      2012-10-04 10:52:23,091 fail2ban.jail : INFO Jail 'ssh-iptables' stopped
      2012-10-04 10:52:23,866 fail2ban.jail : INFO Jail 'fail2ban' stopped
      2012-10-04 10:52:23,994 fail2ban.jail : INFO Jail 'ssh' stopped
      2012-10-04 10:52:23,994 fail2ban.server : INFO Exiting Fail2ban
      2012-10-04 10:52:24,253 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.6
      2012-10-04 10:52:24,253 fail2ban.jail : INFO Creating new jail 'ssh'
      2012-10-04 10:52:24,253 fail2ban.jail : INFO Jail 'ssh' uses poller
      2012-10-04 10:52:24,260 fail2ban.filter : INFO Added logfile = /var/log/auth.log
      2012-10-04 10:52:24,260 fail2ban.filter : INFO Set maxRetry = 6
      2012-10-04 10:52:24,261 fail2ban.filter : INFO Set findtime = 600
      2012-10-04 10:52:24,261 fail2ban.actions: INFO Set banTime = 600
      2012-10-04 10:52:24,279 fail2ban.jail : INFO Creating new jail 'ssh-iptables'
      2012-10-04 10:52:24,279 fail2ban.jail : INFO Jail 'ssh-iptables' uses poller
      2012-10-04 10:52:24,279 fail2ban.filter : INFO Added logfile = /var/log/auth.log
      2012-10-04 10:52:24,280 fail2ban.filter : INFO Set maxRetry = 5
      2012-10-04 10:52:24,280 fail2ban.filter : INFO Set findtime = 600
      2012-10-04 10:52:24,280 fail2ban.actions: INFO Set banTime = 600
      2012-10-04 10:52:24,287 fail2ban.jail : INFO Creating new jail 'fail2ban'
      2012-10-04 10:52:24,287 fail2ban.jail : INFO Jail 'fail2ban' uses poller
      2012-10-04 10:52:24,287 fail2ban.filter : INFO Added logfile = /var/log/fail2ban.log
      2012-10-04 10:52:24,287 fail2ban.filter : INFO Set maxRetry = 3
      2012-10-04 10:52:24,288 fail2ban.filter : INFO Set findtime = 604800
      2012-10-04 10:52:24,288 fail2ban.actions: INFO Set banTime = 604800
      2012-10-04 10:52:24,292 fail2ban.jail : INFO Jail 'ssh' started
      2012-10-04 10:52:24,293 fail2ban.jail : INFO Jail 'ssh-iptables' started
      2012-10-04 10:52:24,297 fail2ban.jail : INFO Jail 'fail2ban' started

    When I issue:

      sudo service fail2ban restart

    fail2ban emails me to say ssh has restarted, but I receive no such email about my phpmyadmin jail. Repeated failed logins to phpMyAdmin do not cause an email to be sent. Have I missed some critical setup? Is my filter's regular expression wrong?
    Update: added changes from default installation

    Starting with a clean fail2ban installation:

      cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

    Change email address to my own, action to:

      action = %(action_mwl)s

    Append the following to jail.local:

      [phpmyadmin]
      enabled = true
      port = http,https
      filter = phpmyadmin
      action = sendmail-whois[name=HTTP]
      logpath = /var/log/phpmyadmin_auth.log
      maxretry = 4

    Add the following to /etc/fail2ban/filter.d/phpmyadmin.conf:

      # phpmyadmin configuration file
      #
      # Author: Michael Robinson
      #
      [Definition]
      # Option: failregex
      # Notes.: regex to match the password failures messages in the logfile. The
      #         host must be matched by a group named "host". The tag "<HOST>" can
      #         be used for standard IP/hostname matching and is only an alias for
      #         (?:::f{4,6}:)?(?P<host>\S+)
      # Values: TEXT
      #
      # Count all bans in the logfile
      failregex = phpMyadmin login failed with username: .*; ip: <HOST>;
      # Option: ignoreregex
      # Notes.: regex to ignore. If this regex matches, the line is ignored.
      # Values: TEXT
      #
      # Ignore our own bans, to keep our counts exact.
      # In your config, name your jail 'fail2ban', or change this line!
      ignoreregex =

    Restart fail2ban:

      sudo service fail2ban restart

    PS: I like eggs
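
    Before worrying about the jail and e-mail side, the filter can be tested on its own with fail2ban's bundled checker; if this reports no matches, the regex (or the log format) is the problem:

      fail2ban-regex /var/log/phpmyadmin_auth.log /etc/fail2ban/filter.d/phpmyadmin.conf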

    Read the article

  • Disable or remove filter driver for single HID device

    - by snoopen
    Running Windows XP in a corporate setting here. I have an issue where a filter driver is interfering with the functionality of different USB HIDs. For example, graphics tablets do not respond while the filter driver is in place. I've also had the issue with foot pedals used with transcription software. My question is really twofold: A) what makes Windows use a filter driver on one HID but not another? B) when a filter driver is causing conflicts, how can I disable it on the affected devices?

    Background

    I've previously narrowed down the issue to the filter driver by uninstalling the software (Funk Proxy Host) responsible for the filter driver. The software is a type of RDP we use here at work. (I might have even booted into safe mode and renamed the file, I forget.) I believe the filter driver is present to disable or modify the use of the local keyboard and mouse while admin staff are assisting users. Either way I don't have the authority to just go uninstalling this software. As far as I can tell the software versions are the same, however I'm not sure if the device driver definitions are all the same as I don't know where these things would be located. To check for the presence of the filter driver I locate the hardware device in Device Manager, then click Properties, the Driver tab, and Driver Details.... It shows up as ph32ihid.sys. Even though all machines are meant to have the same SOE and do have Funk Proxy Host installed, I don't always have issues with the same HIDs. A few machines here run the foot pedals without any issues. I've not had any machines work with the graphics tablet without uninstalling the Funk software.

    Driver details

    I've just read up a bit more about filter drivers and found the driver's description in the registry under "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ProxyHostHIDFilter". There it's called "Kernel-mode HID filter driver for the Proxy Host". Presumably I could also disable it here, but that would be system-wide, which is probably not desirable?

    Read the article

  • How can I mark a group of changes/changesets in SVN, Hg, or Git

    - by sylvanaar
    I would like to mark an arbitrary group of commits/changesets with a label.

      Commit 1   *Mark 1
      Commit 2   *Mark 2
      Commit 3
      Commit 4   *Mark 1
      Commit 5   *Mark 2

    The goal is to easily locate all the changes for a specific mark, and to have that grouping persisted in the VCS directly, as opposed to some outside system like a bug tracking system. The location and ordering of the marks need to be arbitrary, and should be able to work with both committed/uncommitted and pushed/unpushed changes. In SVN the best way I know is to just edit the commit notes and add some sort of special text that you can search for, e.g. "**Mark 1". Or just make a fake edit and commit it and use its commit note to list all the included revisions. Is there a better solution for SVN? Are there equivalent or better solutions for Hg or Git?
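
    For the Git side, one mechanism worth knowing about (my suggestion, not from the question) is git notes, which attaches freeform annotations to existing commits in a dedicated ref and can be added or changed at any time without rewriting history:

      git notes --ref=marks add -m "Mark 1" <sha1>   # attach a mark to any commit, even long after the fact
      git log --notes=marks                          # marks are shown alongside each annotated commit
      git push origin refs/notes/marks               # the marks live in the repository and can be shared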

    Read the article

  • Git fails when pushing commit to github

    - by Steve Melvin
    I cloned a git repo that I have hosted on github to my laptop. I was able to successfully push a couple of commits to github without problem. However, now I get the following error: Compressing objects: 100% (792/792), done. error: RPC failed; result=22, HTTP code = 411 Writing objects: 100% (1148/1148), 18.79 MiB | 13.81 MiB/s, done. Total 1148 (delta 356), reused 944 (delta 214) From here it just hangs and I finally have to ^C back to the terminal.
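
    HTTP code 411 ("Length Required") on a push over HTTP is commonly worked around by raising git's post buffer so the upload is sent with a Content-Length header instead of chunked encoding; a sketch, with the size (500 MB) chosen arbitrarily:

      git config http.postBuffer 524288000   # then retry the push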

    Read the article

  • Trying to find a good strategy using Git for personal development on local/personal machine

    - by AJ
    A noob here. I have a personal Macbook and I want to use Git to track the changes etc. I want to just init a repo on my macbook and work there. Is this a good idea? What if: I have a main repo somewhere in my Macbook HD, like, /Users/user/projects/project1 and clone it to another area on my macbook where I actually perform development? But there is a lot of redundancy in this. I am a little confused and want to know what are the usual steps folks take in a similar personal development environment. Thanks a lot.
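
    Initialising a repository directly in the working project is the usual setup for this; there is no need for a separate local clone to develop in. A minimal sketch, with the path taken from the question:

      cd /Users/user/projects/project1
      git init
      git add .
      git commit -m "Initial commit"
      git checkout -b experiment        # optional: do risky work on a branch, merge it back when happy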

    Read the article

  • Git repository is out of sync after rebase

    - by Keyo
    I have squashed 2 commits (A and B) into one new commit (C). The previous two commits (A and B) were removed. I pushed these commits from my development repo to a central (bare) repository. The git log on both repos confirms that commits A and B have been removed. The problem is when I do a pull on a third repository which already had A and B: it now has all three commits (A, B and C). I would have thought the pull would synchronise these changes. Do I need to checkout A~1 and then merge in the new changes? This seems like a hassle, especially in a production environment.
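
    Because the squash rewrote history, a plain pull in the third repository merges the old A/B line with the new C instead of replacing it. Assuming that repository has no local work worth keeping (an assumption on my part), it can be made to match the central repo like this:

      git fetch origin
      git checkout master
      git reset --hard origin/master   # drop the stale A and B and adopt the rewritten history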

    Read the article

  • Git Subtree. Why can't I branch from a subtree rather than the root?

    - by dugla
    I am struggling trying to make sense of using the Git subtree strategy. My intent was to pull some disparate repos together into a little family of toy repos under an umbrella repo. I'm using the subtree strategy detailed here: http://help.github.com/subtree-merge I am pulling my hair out trying to convince Git that I want to create a branch from one of these subtrees NOT from the root. When I cd into a subtree, create the branch, and then cd back to the root, running git branch from the root clearly indicates the branch was created at the root. Sigh. I love git/github but it is maddening getting this seemingly routine task to work properly. Could someone please enlighten me?

    Read the article

  • Preview a git push

    - by Saverio Miroddi
    How can I see which commits are actually going to be pushed to a remote repository? As far as I know, whenever I pull master from the remote repository, commits are likely to be generated, even if they're empty. This causes the local master to be 'forward' even if there is really nothing to push. Now, if I try (from master): git cherry origin master I have an idea of what's going to be pushed, though this also displays some commits that I've already pushed. Is there a way to display only the new content that's going to be pushed?
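
    One way to preview exactly what a push would send (my suggestion, assuming the remote is called origin and the branch is master):

      git fetch origin                           # make sure origin/master is current
      git log --oneline origin/master..master    # the commits that would be pushed
      git diff --stat origin/master..master      # the content those commits change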

    Read the article

  • Servlet Filter: Socket need to be referenced in doFilter()

    - by Craig m
    Right now I have a filter that has the sockets opened in init(), and for some reason when I open them in doFilter() it doesn't work right with the server app, so I have no choice but to put them in init(). I need to be able to reference the outSide.println("test"); in doFilter() so I can send that to my server app every time the if statement it is in is tripped. Here's my code:

      import java.net.*;
      import java.io.*;
      import java.util.*;
      import javax.servlet.*;
      import javax.servlet.http.*;

      public final class IEFilter implements Filter {

          public void doFilter(ServletRequest request, ServletResponse response,
                               FilterChain chain) throws IOException, ServletException {
              String browser = "";
              String blockInfo;
              String address = request.getRemoteAddr();

              if (((HttpServletRequest) request).getHeader("User-Agent").indexOf("MSIE") >= 0) {
                  browser = "Internet Explorer";
              }

              if (browser.equals("Internet Explorer")) {
                  BufferedWriter fW = new BufferedWriter(new FileWriter("C://logs//IElog.rtf"));
                  blockInfo = "Blocked IE user from:" + address;
                  response.setContentType("text/html");
                  PrintWriter out = response.getWriter();
                  out.println("<HTML>");
                  out.println("<HEAD>");
                  out.println("<TITLE>");
                  out.println("This page is not available - JNetProtect");
                  out.println("</TITLE>");
                  out.println("</HEAD>");
                  out.println("<BODY>");
                  out.println("<center><H1>Error 403</H1>");
                  out.println("<br>");
                  out.println("<br>");
                  out.println("<H1>Access Denied</H1>");
                  out.println("<br>");
                  out.println("Sorry, that resource may not be accessed now.");
                  out.println("<br>");
                  out.println("<br>");
                  out.println("<hr />");
                  out.println("<i>Page Filtered By JNetProtect</i>");
                  out.println("</BODY>");
                  out.println("</HTML>");
                  //init.outSide.println("Blocked and Internet Explorer user");
                  fW.write(blockInfo);
                  fW.newLine();
                  fW.close();
              } else {
                  chain.doFilter(request, response);
              }
          }

          public void destroy() {
              outsocket.close();
              outSide.close();
          }

          public void init(FilterConfig filterConfig) {
              try {
                  ServerSocket fs;
                  Socket outsocket;
                  PrintWriter outSide;
                  outsocket = new Socket("Localhost", 1337);
                  outSide = new PrintWriter(outsocket.getOutputStream(), true);
              } catch (Exception e) {
                  System.out.println("error with this connection");
                  e.printStackTrace();
              }
          }
      }

    Read the article

  • How can I view multiple git diffs side by side in vim

    - by Pete Hodgson
    I'd like to be able to run a command that opens up a git diff in vim, with a tab for each file in the diff set. So if for example I've changed files foo.txt and bar.txt in my working tree and I ran the command I would see vim open with two tabs. The first tab would contain a side-by-side diff between foo.txt in my working tree and foo.txt in the repository, and the second tab would contain a side-by-side diff for bar.txt. Anyone got any ideas?
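
    Not quite one tab per file, but git's built-in difftool support gets close (my suggestion, not from the question): it opens each changed file in its own side-by-side vimdiff session, moving to the next file when the current one is closed:

      git difftool --tool=vimdiff --no-prompt          # diffs the working tree, one file per vimdiff session
      git difftool --tool=vimdiff --no-prompt HEAD~1   # or against any commit, e.g. the previous one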

    Read the article

  • Excluding files from being deployed with Capistrano while still under version control with Git

    - by Jimmy Cuadra
    I want to start testing the JavaScript in my Rails apps with qUnit and I'm wondering how to keep the test JavaScript and test runner HTML page under version control (I'm using Git, of course) but keep them off the production server when I deploy the app with Capistrano. My first thought is to let Capistrano send all the code over as usual including the test files, and write a task to delete them at the end of the deployment process. This seems like sort of a hack, though. Is there a cleaner way to tell Capistrano to ignore certain parts of the repository when deploying?

    Read the article

  • Git pre-receive hook

    - by Ralphz
    Hi. When you enable the pre-receive hook for a git repository, it takes no arguments, but for each ref to be updated it receives on standard input a line of the format:

      <old-value> SP <new-value> SP <ref-name> LF

    where <old-value> is the old object name stored in the ref, <new-value> is the new object name to be stored in the ref, and <ref-name> is the full name of the ref. When creating a new ref, <old-value> is 40 zeroes. Can anyone explain to me how I can examine all the files that will be changed in the repository if I allow this commit? I'd like to run those files through some scripts to check syntax and so on. Thanks.
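
    A rough sketch of such a hook (my own illustration: my-syntax-check is a hypothetical command standing in for whatever checker you want, filenames with spaces are not handled, and the all-zero <old-value> case of brand-new refs is skipped for brevity). Exiting non-zero from the hook rejects the push:

      #!/bin/sh
      # pre-receive: one "<old> <new> <refname>" line per updated ref arrives on stdin
      status=0
      while read oldrev newrev refname; do
          for file in $(git diff --name-only "$oldrev" "$newrev"); do
              # read each file's content as it will exist after the push and run the checker on it
              git show "$newrev:$file" | my-syntax-check "$file" || status=1
          done
      done
      exit $status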

    Read the article

  • Git pack file entry format

    - by Ben Collins
    My understanding of the Git pack file format is something like this: the table is 32 bits wide, the first three 32-bit words are the pack file header, and the last row of 32 bits is the first 4 bytes of an entry. As I understand it, the size of the entry is specified by consecutive bytes with the MSB set, followed by compressed data. In the first byte whose MSB is not set, is the MSB part of the compressed data, or is it a gap? If it's part of the compressed data, how can you guarantee that when the data is compressed that bit won't be set?
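
    To sanity-check a reading of the on-disk layout against what git itself reports, the bundled verify-pack tool lists each object's type, size, packed size and byte offset within the pack:

      git verify-pack -v .git/objects/pack/pack-*.idx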

    Read the article

  • Git: how do you merge with remote repo?

    - by Marco
    Please help me understand how git works. I clone my remote repository on two different machines. I edit the same file on both machines. I successfully commit and push the update from the first machine to the remote repository. I then try to push the update on the second machine, but get an error: ! [rejected] master -> master (non-fast-forward) I understand why I received the error. How can I merge my changes into the remote repo? Do I need to pull the remote repo first?
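
    The rejected push means the remote has commits the second machine does not, so they have to be merged locally first. A minimal sketch, assuming the remote is origin and the branch is master:

      git pull origin master   # fetch the remote changes and merge them into your local master
      # resolve any conflicts and commit the merge, then
      git push origin master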

    Read the article

  • Automatically pulling on remote server with Git push?

    - by Vernon
    Here's what I'm trying to do: I have a GitHub repository, a portion of which I'd like to make web viewable. Right now I've cloned the repository on my own server and it works well, but in order to keep it up to date, I have to manually log in and pull the latest changes. I'm not sure if this is the best idea (or the best approach), but I'd like the remote server to automatically pull whenever someone pushes to the repository. GitHub makes it easy enough to run a script when someone pushes, but I'm not sure how to pull once someone does that. I was using PHP for simplicity, but just doing something like git pull naturally doesn't work because of permissions. Is this a bad idea or is there another way of achieving what I want to do? This seems like a common setup, but I wasn't sure. Thanks.
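
    Whatever endpoint the GitHub hook hits, it ultimately just needs to run a pull as a user that owns the checkout and can read from GitHub (for a private repo, via a read-only deploy key). A sketch of that server-side piece; the path and the www-data user are assumptions, not taken from the question:

      # the command the hook endpoint needs to trigger, run as the web server user:
      cd /var/www/site && git pull origin master
      # give that user ownership of the checkout so the pull has permission to write:
      sudo chown -R www-data:www-data /var/www/site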

    Read the article

  • Setting up a Git remote with a truncated history

    - by drg
    I am in the midst of doing some non-standard, probably doomed, experiments on a git repository. The goal is to create a remote repository with a truncated history which can still share commits with an internal repository which has a full history. I've had some success using a graft to connect the public history with the private history - when I push from my internal repository, only the post-graft contents are included. So my main question is: what is the simplest way of taking a commit, eliminating its parent and writing a graft in place of the parent? A more general question: is what I'm trying to do going to cause me pain in the long run, do you know if there's a better way?
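
    For the narrow question of cutting a commit loose from its parent, a graft line that lists the commit with no parents makes it a root, and filter-branch can then bake that in permanently. A sketch, with <sha1> as a placeholder for the commit that should become the new root:

      echo <sha1> >> .git/info/grafts   # a graft with no parents listed turns <sha1> into a root commit
      git filter-branch -- --all        # optional: rewrite history so the graft no longer needs to exist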

    Read the article

  • Deleting branches in git causes gitk to go wild

    - by a2h
    I decided to delete a few branches from a (personal project) repository of mine that were merged into master, after confirming on #git that leftover branches aren't really necessary. However, as a result gitk's visualisation of my repository's history has been completely screwed up: branches sprout from commits out of nowhere and eventually rejoin other commits further ahead. A merge did not occur at all of those points, and I only had around 5 extra branches. Is this normal? Is there any fix for this?

    Read the article

  • Is there a database with git-like qualities?

    - by Mat
    I'm looking for a database where multiple users can contribute and commit new data; other users can then pull that data into their own database repository, all in a git-like manner. A transcriptional database, if you like; does such a thing exist? My current thinking is to dump the database to a single file as SQL, but that could well get unwieldy once it is of any size. Another option is to dump the database and use the filesystem, but again it gets unwieldy once of any size.

    Read the article

  • How to keep .cproject local to each user while working collaboratively through git

    - by Don't panic
    I have a C++ project that I am working on with several other people. Some of us have Macs with OS X and some of us have PCs with either Windows 7 or Windows 8.1. We are currently using Eclipse to edit the project and git for version control. The problem is that whenever you change property settings on one team member's computer, the .cproject file is updated. Because different configurations/file extensions are used across OS X and Windows, we want the .cproject file to remain local. We have tried untracking .cproject through a gitignore entry for the .cproject file, but that just removes the .cproject file from the repository altogether. We have also tried setting up assume-unchanged for .cproject, but if .cproject is changed all this leads to is the need to manually deal with conflicts and updates. Is there any way to keep the file in the repository, but only change it locally? I.e. merging would not update the .cproject file.
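
    The closest git gets to this is the skip-worktree bit, which keeps the file in the repository but tells git to ignore local modifications to it in that clone (my suggestion, not from the question). Each person has to set it once per clone, and if .cproject genuinely changes upstream the bit may still need to be cleared temporarily to take that update:

      git update-index --skip-worktree .cproject      # local edits stop showing up in status/commits
      git update-index --no-skip-worktree .cproject   # undo it when you do want an upstream change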

    Read the article

  • Problems getting git 'server' to work on Windows

    - by Benjol
    I've followed Tim's article (mentioned in the answer to this question), but - like many others, it seems - I'm stuck when trying to do the test clone at the end. I get the fatal: the remote end hung up unexpectedly error, even though my $HOME path seems to be right. Anyone got any pointers to where I might start debugging this? My git and linux-fu are severely limited... I'm aware of this answer to the same question, but it doesn't apply in my case, I don't get any messages about paths.
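
    Two general-purpose ways to get more detail out of a failing clone over SSH (a suggestion on my part; the URL below is a placeholder for your actual server and repository path):

      GIT_TRACE=1 git clone ssh://git@yourserver/path/to/repo.git   # shows the commands git runs under the hood
      ssh -v git@yourserver                                         # tests the SSH/login side on its own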

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >