Search Results

Search found 9301 results on 373 pages for 'git filter branch'.


  • MongoDb - $match filter not working in subdocument

    - by Ranjith
    This is the collection structure:

        [{
            "_id" : "....",
            "name" : "aaaa",
            "level_max_leaves" : [
                { level : "ObjectIdString 1", max_leaves : 4 }
            ]
        },
        {
            "_id" : "....",
            "name" : "bbbb",
            "level_max_leaves" : [
                { level : "ObjectIdString 2", max_leaves : 2 }
            ]
        }]

    I need to filter on the subdocument value level_max_leaves.level, matching it against a given input value. This is how I tried it, for example:

        var empLevelId = 'ObjectIdString 1';
        MyModel.aggregate(
            {$unwind: "$level_max_leaves"},
            {$match: {"$level_max_leaves.level": empLevelId }},
            {$group: {
                "_id": "$level_max_leaves.level",
                "total": { "$sum": "$level_max_leaves.max_leaves" }
            }},
            function (err, res) {
                console.log(res);
            });

    But here the $match filter is not working: I can't get the expected results for "ObjectIdString 1". If I filter on the name field, it works fine, like this:

        {$match: {"$name": "aaaa" } },

    But at the subdocument level it returns 0:

        {$match: {"$level_max_leaves.level": "ObjectIdString 1"} },

    My expected result was:

        { "_id" : "ObjectIdString 1", "total" : 4 }
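
    One likely culprit worth checking (this rests on a standard MongoDB aggregation rule rather than anything stated in the question): field paths on the left-hand side of a $match condition must not carry the $ prefix; $ is only used when referencing a field's value, as in the $group stage. A minimal sketch of the corrected pipeline:

        var empLevelId = 'ObjectIdString 1';
        MyModel.aggregate(
            {$unwind: "$level_max_leaves"},
            // no "$" prefix on the left-hand side of a $match condition
            {$match: {"level_max_leaves.level": empLevelId }},
            {$group: {
                "_id": "$level_max_leaves.level",
                "total": { "$sum": "$level_max_leaves.max_leaves" }
            }},
            function (err, res) {
                console.log(res);
            });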


  • Possible to set filter on subform from parent form before subform data loads

    - by tbone
    I have frmParentForm with multiple controls used to build a filter for frmSubForm. In frmParentForm_Load, I am doing (simplified example):

        Me.sbfInvoice_List.Form.Filter = "[created_on] >= #" & Me.RecentOrderDateCutoff & "#"
        Me.sbfInvoice_List.Form.FilterOn = True

    The problem is that on initial load the subform load seems to occur first, so the entire table is loaded. Is there a way (in a different event, perhaps) to properly set the subform filter from the parent form so it is applied before the subform does its initial data load? (The subform can exist on its own, or as a child of many different parent forms (sometimes filtered, sometimes not), so I'd rather not put some complicated hack in the subform itself to accomplish this.)
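
    One approach that is sometimes suggested for this (a sketch only, not from the question: it assumes the subform's RecordSource can be left blank at design time, and tblInvoice is a hypothetical table name): with a blank RecordSource the subform's initial load fetches nothing, and the parent assigns a pre-filtered source afterwards:

        Private Sub Form_Load()
            ' The subform's RecordSource is blank at design time, so its
            ' initial load fetches no rows; the filtered source is set here.
            Me.sbfInvoice_List.Form.RecordSource = _
                "SELECT * FROM tblInvoice WHERE [created_on] >= #" & _
                Me.RecentOrderDateCutoff & "#"
        End Sub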


  • Filter records based on Date Range + ASP.NET + Regex + Javascript

    - by ASIF
    Hi, I need to filter data based on a date range. My table has a Process Date field. I need to filter the records and display those in the range FromDate to ToDate. How do I write a function in VB.NET which can help me filter the data?

        Protected Shared Function ObjectInRange(ByRef obj As Object, ByVal str1 As String, ByVal str2 As String) As Boolean
            Dim inRange = False
            For Each prop As PropertyInfo In obj.GetType().GetProperties()
                Dim propVal = prop.GetValue(obj, Nothing)
                If propVal Is Nothing Then
                    Continue For
                End If
                Dim propValString = Convert.ToString(propVal)
                If Regex....WHAT GOES HERE? Then
                    inRange = True
                    Exit For
                End If
            Next
            Return inRange
        End Function

    Am I on the right track?
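
    For what it's worth, a regex is an awkward tool for a range comparison; a sketch of an alternative body for the loop (illustration only, not the poster's code: it assumes str1 and str2 are the FromDate/ToDate strings and that the property values parse as dates):

        ' a minimal sketch: compare parsed dates instead of matching text
        Dim propDate, fromDate, toDate As Date
        If Date.TryParse(propValString, propDate) AndAlso _
           Date.TryParse(str1, fromDate) AndAlso _
           Date.TryParse(str2, toDate) Then
            If propDate >= fromDate AndAlso propDate <= toDate Then
                inRange = True
                Exit For
            End If
        End If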


  • Drupal, Views, Exposed filter: custom default selected tags

    - by Patrick
    Hi, I'm using Drupal and Views with an exposed filter (using taxonomy). My customer wants to set the default selected tags from the back-end (in order to pre-filter the Views content). There is functionality for this in the exposed filter settings. However, it doesn't work properly: when I click the "Select None" link (I'm using the better_exposed_filters module) I expect none of the tags to be selected; instead, the default configuration (the default selected tags) is selected, so it effectively stops working. I hope that is clear. So, I was wondering if I can build a custom menu in which my user can check/uncheck the default selected tags for the view. Thanks.


  • Badword filter in PHP?

    - by morpheous
    I am writing a bad-word filter in PHP. I have a list of bad words in an array, and the cleanse_text() method is written like this:

        public static function cleanse_text($originalstring) {
            if (!self::$is_sorted)
                self::doSort();
            return str_ireplace(self::$badwords, '****', $originalstring);
        }

    This works trivially for exact matches, but I also wanted to censor words that have been disguised, like 'ab*d' where 'abcd' is a bad word. This is proving to be a bit more difficult. Here are my questions:

    Is a bad-word filter worth bothering with? (It is a site for professionals, so a certain minimum decorum is required - I would have thought.)
    Is it worth the hassle of trying to capture obvious workarounds like 'f*ck', or should I not attempt to filter those out?
    Is there a better way of writing the cleanse_text() method above?
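
    A sketch of one way to catch disguised variants (an illustration, not the poster's code: the word list and symbol set are hypothetical, and it assumes words of at least two letters with the first and last letters left unmasked, as in 'f*ck'):

        <?php
        // build, per word, a regex that lets inner letters be masked by symbols
        function disguise_pattern($word) {
            $chars = str_split($word);
            $symbols = preg_quote('*@#$%!?', '/');
            $middle = '';
            for ($i = 1; $i < count($chars) - 1; $i++) {
                $middle .= '[' . preg_quote($chars[$i], '/') . $symbols . ']';
            }
            return '/\b' . preg_quote($chars[0], '/')
                 . $middle
                 . preg_quote(end($chars), '/') . '\b/i';
        }

        $badwords = array('abcd', 'wxyz');   // hypothetical word list
        $patterns = array_map('disguise_pattern', $badwords);

        echo preg_replace($patterns, '****', 'abcd and ab*d slip through');
        // prints: **** and **** slip through
        ?>

    Restricting substitutions to a symbol set (rather than allowing any character) keeps the filter from censoring innocent words of the same length.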


  • Filter a List via Another

    - by user1166905
    I have a requirement to filter a list of Clients based on whether they have had any jobs booked in the last x months. In my code I have two lists: one is my Clients, and the other is a filtered list of Jobs between today and x months ago. The idea is to filter Clients whose id does not appear in the jobs list. I tried the following:

        filteredClients.Where(n => jobsToSearch.Count(j => j.Client == n.ClientID) == 0).ToList();

    But I seem to get ALL clients regardless. I can easily do a foreach, but this severely slows down the process. How can I filter the client list based on the job list effectively?
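
    Two things stand out (a sketch under assumptions: ClientID is taken to be an int, and the names follow the snippet above). Where() does not modify the list in place, so its result has to be assigned, and a HashSet avoids re-scanning the whole job list once per client:

        // build the set of client ids that do have recent jobs (one pass over jobs)
        var clientsWithJobs = new HashSet<int>(jobsToSearch.Select(j => j.Client));

        // keep only clients absent from that set, and keep the result this time
        filteredClients = filteredClients
            .Where(n => !clientsWithJobs.Contains(n.ClientID))
            .ToList();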


  • Translate This git_parse_function to zsh?

    - by yar
    I am using this function in Bash:

        function parse_git_branch {
          git_status="$(git status 2> /dev/null)"
          pattern="^# On branch ([^${IFS}]*)"
          if [[ ! ${git_status}} =~ "working directory clean" ]]; then
            state="*"
          fi
          # add an else if or two here if you want to get more specific
          if [[ ${git_status} =~ ${pattern} ]]; then
            branch=${BASH_REMATCH[1]}
            echo "(${branch}${state})"
          fi
        }

    but I'm determined to use zsh. While I can use this perfectly well as a shell script (even without a shebang), in my .zshrc I get a parse error on the line beginning if [[ ! ${git_status}}... What do I need to do to get it ready for zsh? Note: I realize the answer could be "go learn zsh syntax," but I was hoping for a quick hand with this if it's not too difficult.
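
    For reference, a zsh rendition (a sketch, assuming a zsh recent enough to support [[ =~ ]]): the stray extra brace in ${git_status}} goes away, and BASH_REMATCH becomes zsh's $match array (alternatively, run setopt BASH_REMATCH and keep the bash spelling):

        function parse_git_branch {
          local git_status pattern state branch
          git_status="$(git status 2> /dev/null)"
          pattern="^# On branch ([^${IFS}]*)"
          if [[ ! ${git_status} =~ "working directory clean" ]]; then
            state="*"
          fi
          if [[ ${git_status} =~ ${pattern} ]]; then
            # zsh stores capture groups in $match rather than BASH_REMATCH
            branch=${match[1]}
            echo "(${branch}${state})"
          fi
        }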


  • How to filter/sort/rank object model nodes?

    - by BCS
    I have some kind of object model, and I need to filter and sort its nodes on some kind of property. What kinds of automated systems exist to generate and select properties of the object model that correlate to what I want? (I'm intentionally being abstract and non-specific.) I'm thinking of a system that works kind of like spam filters or supervised classification systems, in that, given an example data set, it identifies rules that find nodes of interest. However, I'm looking for a more general system, in that it shouldn't require any design-time information about the object model. It should work equally well as a spam filter on e-mail, a bug finder on a code base, an interest filter in a newsgroup, or a bot-account finder on a social networking site. As long as it can explore the object model via reflection and be given a set of "interesting" nodes, it should be able to find rules that will find more nodes like them.


  • Apache consuming large memory with Apache Mobile Filter

    - by VarunGupta
    I installed and configured the Apache Mobile Filter module for Apache to redirect users to the mobile version of our website, depending on their user agents. I configured the module to use WURFL. But as soon as I start Apache, it hogs a large amount of memory (without any incoming web requests):

        Resident Memory: 300 to 400 MB
        Virtual Memory:  300 to 650 MB

    Without this module, Apache was consuming much less memory (4 to 10 MB). What could be the reason here?


  • Untar with date filter

    - by Don
    Is there any way to untar and extract only those files that are newer than a certain date, including the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including directory structure) based on a date filter on the files, if possible.
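
    One possible approach (a sketch, assuming GNU tar and its usual -tv listing format of "perms owner size YYYY-MM-DD HH:MM name"; the archive name and cutoff date are placeholders): list the archive, select names by date, then extract just those names:

        # 1. collect names of entries dated on or after the cutoff
        #    (note: filenames containing spaces would break this field split)
        tar -tvf backup.tar | awk '$4 >= "2010-04-01" { print $6 }' > newer-files.txt

        # 2. extract only those entries; parent directories are created as needed
        tar -xvf backup.tar --files-from=newer-files.txt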


  • Circumvent proxy filter that disables the download of EXE files

    - by elgrego
    Do you know of a way to download exe files even though the web proxy has a filter in place that disallows this? I have searched for a web site that does automatic file renaming, which should certainly make it possible. The solution would take a URL and then change the extension so that it would look to my proxy as if I were downloading a .dat file (or similar). There are perhaps other solutions to this problem.


  • MS Project 2010 Filter and Highlights

    - by claubervs
    I'd like to know if there is a way to highlight dates that differ from one another. I have two columns, "Baseline Finish Date" and "Re-forecast Finish Date", and I would like to highlight, for each task, the dates that differ between those columns - meaning the tasks that suffered a re-forecast due to other circumstances and no longer equal the original dates. I would also like a filter that does the same thing as above, showing only these tasks with differing dates.


  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far):

        a/b/c/foo
        a/b/c/bar

    Obviously, 'git status -u' shows:

        # Untracked files:
        ...
        #       a/b/c/bar
        #       a/b/c/foo

    What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus:

        c/

    then 'git status -u' shows both foo and bar as ignored:

        # Untracked files:
        ...
        #       .gitignore

    which is as I expect. Now if I add an exclusion rule for foo, thus:

        c/
        !foo

    according to the gitignore manpage, I'd expect this to work. But it doesn't - it still ignores foo:

        # Untracked files:
        ...
        #       .gitignore

    This doesn't work either:

        c/
        !a/b/c/foo

    Neither does this:

        c/*
        !foo

    which gives:

        # Untracked files:
        ...
        #       .gitignore
        #       a/b/c/bar
        #       a/b/c/foo

    In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect:

        a/b/c/
        !a/b/c/foo

    That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there:

        *
        !foo

    But the problem with this is that eventually there will be other subdirectories under a/b/c, and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project and cover all the 'standard' subdirectory structure. This also seems to be equivalent:

        a/b/c/*
        !a/b/c/foo

    This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files named 'foo' at different levels of the subdirectory tree. Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored by a rule ending in a '/'. Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observing this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
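
    For context (this is behaviour documented in the gitignore manpage rather than stated in the question): git does not descend into a directory that is itself excluded, so a file under an ignored directory cannot be re-included; the chain of parent directories has to stay unignored. The usual pattern is to exclude each level's *contents* with a trailing /* and re-include the next directory down. A sketch for this layout, placed in the top-level .gitignore:

        a/*
        !a/b
        a/b/*
        !a/b/c
        a/b/c/*
        !a/b/c/foo

    This is why the asker's a/b/c/* variant works while every variant ending in c/ does not: the former leaves the directories themselves unignored.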


  • Why does filter: blur(0) still cause text to blur under Webkit?

    - by johnkavanagh
    I've come across a bug today that's taken far longer than I would like to admit to identify. Essentially: setting a filter: blur(0) (or the vendor-specific -webkit-filter) on an element should - I believe - mean that no form of blur is applied. However, having tested this today, it would appear that Webkit-based browsers still blur the text within any element with either blur(0) or blur(0px) assigned to it. I've knocked together a quick Fiddle here: http://jsfiddle.net/f9rBE/

    These are three identical divs containing text (no custom fonts). The first has absolutely nothing assigned:

        Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam facilisis orci in quam venenatis, in tempus ipsum sagittis. Suspendisse potenti. Donec ullamcorper lacus vel odio accumsan, vel aliquam libero tempor. Praesent nec libero venenatis, ultrices arcu non, luctus quam. Morbi scelerisque sit amet turpis sit amet tincidunt. Praesent semper erat non purus pretium consequat. Aenean et iaculis turpis. Curabitur diam tellus, consectetur non massa et, commodo venenatis metus.

    One has no styles at all assigned; the other two have blur(0) and blur(0px):

        .no-blur{}

        .zero-px-blur{
            -webkit-filter: blur(0px);
            -moz-filter: blur(0px);
            -o-filter: blur(0px);
            -ms-filter: blur(0px);
            filter: blur(0px);
        }

        .zero-blur{
            -webkit-filter: blur(0);
            -moz-filter: blur(0);
            -o-filter: blur(0);
            -ms-filter: blur(0);
            filter: blur(0);
        }

    If you preview this under Chrome/Safari you'll see that the text in the second two is still blurred. A few things worth noting:

    - This unintentional blurring occurs in Safari on iOS7 devices (both iPhones and iPads);
    - It also occurs in Chrome and Safari under OSX;
    - It doesn't happen under Firefox on OSX.

    Of course, this isn't supported at all in Firefox just yet, so it's hard to tell whether the behaviour I'm seeing is intentional/expected behaviour, or whether this is a bug in Webkit. Is it possible that this is only prevalent on higher-density devices (ie: retina MacBook/iPhone/iPad)? With this in mind, how do you actually overwrite an item that has blur applied to it to set it back to non-blurred?
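
    As for resetting it: the CSS Filter Effects spec's way to disable filters entirely is the keyword value none (the property's initial value) rather than a zero-length blur, so a reset rule would look like this (a sketch; the class name is hypothetical):

        .reset-blur {
            /* 'none' removes the filter entirely, avoiding the blur(0) rendering path */
            -webkit-filter: none;
            filter: none;
        }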


  • How to filter rows in JTable based on boolean valued columns?

    - by vinny
    I'm trying to filter rows based on a column, say c1, that contains boolean values. I want to show only the rows that have 'true' in c1. I looked at the examples in http://java.sun.com/docs/books/tutorial/uiswing/components/table.html#sorting. The example uses a regex filter. Is there any way I can use boolean values to filter rows? The following is the code I'm using (borrowed from the example):

        private void filter(boolean show) {
            RowFilter<TableModel, Object> filter = null;
            TableModel model = jTb.getModel();
            boolean value = (Boolean) model.getValueAt(0, 1);
            // If current expression doesn't parse, don't update.
            try {
                // I need to use 'value' to filter instead of filterText.
                filter = RowFilter.regexFilter(filterText, 0);
            } catch (java.util.regex.PatternSyntaxException e) {
                return;
            }
            sorter.setRowFilter(filter);
        }

    Thank you.
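
    A custom RowFilter subclass can test the boolean cell directly, with no regex involved (a sketch under assumptions: the boolean lives in model column 0, and sorter is a TableRowSorter<TableModel> as in the tutorial):

        RowFilter<TableModel, Integer> booleanFilter = new RowFilter<TableModel, Integer>() {
            @Override
            public boolean include(Entry<? extends TableModel, ? extends Integer> entry) {
                // keep the row only when column 0 holds Boolean.TRUE
                return Boolean.TRUE.equals(entry.getValue(0));
            }
        };
        sorter.setRowFilter(booleanFilter);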


  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let's review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream, which, if a filter is set, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.OutputStream. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods - IsGZipSupported and GZipEncodePage - that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name, this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(
                        Response.Filter, System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(
                        Response.Filter, System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }

                // Allow proxy servers to cache encoded and unencoded versions separately
                Response.AppendHeader("Vary", "Content-Encoding");
            }
        }

    As you can see, the actual assignment of the filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and ensure that any pages that are cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).
    Using this utility function is trivially easy: any ASP.NET code that wants to compress its Response output simply calls it:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                            "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd",
                            "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there, Hoss - Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box - it has to be explicitly implemented. Other low-level HTTP clients on other platforms don't support GZip out of the box either. The problem is that your application, your Web server, and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied, or worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header - not just the URL - when caching content. The same thing applies if you do output caching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode, and she'll get a page full of garbage. Wholly undesirable.
    To fix this you need to add some custom output cache rules by way of the GetVaryByCustom() HttpApplication method in your global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";

                return "";
            }

            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching you then specify:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    to use that custom rule.

    It's all Fun and Games until ASP.NET throws an Error

    Ok, so you're up and running with GZip, you have your caching squared away, and the pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page of binary junk. Lovely, isn't it? What's happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip-encoded content without any notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??` ... binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering, especially GZip filtering
            Response.Filter = null;
            …
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.
    Another, and possibly even better, solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate encoding is applied the headers are set;
            // also works when an error occurs while filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream &&
                response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream &&
                     response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the headers accordingly. This is actually a better solution, since it is generic - it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or a short error display might change the output without the filter being cleared, and this code handles that as well. Sweet, thanks Adam. It's unfortunate that ASP.NET doesn't natively clear out Response.Filter when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey - this is what it is, and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content

    Built-in GZip/Deflate Compression in IIS 7.x
    HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET, IIS7


  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written:

        /application
        /modules
            /cms
                /filemanager
                /block
                /pages
                /sitemap
            /youtube
            /rss
        /skin
            /backend
                /default
                    /css
                    /js
                    /images
            /frontend
                /default
                    /css
                    /js
                    /images

    Application contains code specific to the current CMS implementation, i.e. code for this specific CMS. Modules contain reusable portions of code that we share across projects, such as libraries to work with YouTube or RSS feeds. We include these as git submodules, so that we can update the module in any website and push the changes back across all other projects. It makes it really easy to apply a change to our code and distribute it. We wanted to turn the CMS into a module so we get the same benefit: we can run the entire project under source control, then update the CMS as required through a git submodule. We have run into a problem, however: the CMS requires javascript/images/css in order for it to work correctly. Things we have thought about:

    1. We could create 2 submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one version without having some idea of which versions of skin work with which versions of cms, i.e. version 1.2.2 of CMS might have issues with 1.0.3 of CMS-Skin.
    2. We could add the skin to the cms module, but this has the following problems: skin should be available under the document root while module code shouldn't be (and if it is, it should probably be secured via .htaccess), and it doesn't seem to make any sense to bundle assets with PHP code.
    3. We could create a symlink between /skin/backend/ and /modules/cms/skin (see the sketch after this list), but does this cause any security problems, and do we want to require something like a symlink for the application to work?
    4. We could create a git hook or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs, but this means we lose the ability to edit CMS core files in a project and then push them back.

    How is this typically done in large-scale CMSs? How is it possible to get the source code for a CMS under version control, work on the application for a client, then update the source code as releases are given by the vendor? How do applications like Magento or Drupal do this?
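
    For illustration, the submodule-plus-symlink option (3) might look like this (a sketch only; the remote URL is a placeholder and the paths follow the tree above):

        # add the CMS as a submodule alongside the other shared modules
        git submodule add https://example.com/modules/cms.git modules/cms

        # expose the module's backend skin under the document root
        ln -s ../modules/cms/skin/backend skin/backend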


  • Using maven-release-plugin to tag and commit to non-origin

    - by Ali G
    When I do a release of my project, I want to share the source with a wider group of people than I normally do during development. The code is shared via a Git repository. To do this, I have used the following:

    - remote public repository: released code is pushed here, every week or so (http://example.com/public)
    - remote private repository: non-release code is pushed here, more than daily (http://example.com/private)

    In my local git repository, I have the following remotes defined:

        origin  http://example.com/private
        public  http://example.com/public

    I am currently trying to configure the maven-release-plugin to manage versioning of the builds, and to manage tagging and pushing of code to the public repository. In my pom.xml, I have listed the <scm> element as follows:

        <scm><connection>scm:git:http://example.com/public</connection></scm>

    (Removing this line will cause mvn release:prepare to fail.) However, when calling

        mvn release:clean release:prepare release:perform

    Maven calls git push origin tagname rather than pushing to the URL specified in the POM. So the questions are:

    - Best practice: should I just be tagging and committing in my private repo (origin), and pushing to public manually?
    - Can I make Maven push to the repository that I choose, rather than defaulting to origin? I felt this was implied by the requirement of the <connection> element in <scm>.
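
    One thing that may be worth trying (a sketch, not a verified fix: in Maven's SCM model, connection is the read-only URL while developerConnection is the read-write URL intended for tagging and pushing, so declaring the latter gives the release plugin an explicit push target):

        <scm>
          <!-- read-only URL, used for checkouts -->
          <connection>scm:git:http://example.com/public</connection>
          <!-- read-write URL, intended for tags and pushes -->
          <developerConnection>scm:git:http://example.com/public</developerConnection>
        </scm>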


  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path after writable= in gitosis.conf, I managed to insert a few line breaks. I saved, committed and pushed to the server, whereupon I received the following parsing error:

        Traceback (most recent call last):
          File "/usr/bin/gitosis-run-hook", line 8, in <module>
            load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run
            return app.main()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main
            self.handle_args(parser, cfg, options, args)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args
            post_update(cfg, git_dir)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update
            cfg.read(os.path.join(export, '..', 'gitosis.conf'))
          File "/usr/lib/python2.5/ConfigParser.py", line 267, in read
            self._read(fp, filename)
          File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read
            raise e
        ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf
        (...)

    I have removed the line break and amended the commit with:

        git commit -m "fix linebreak" --amend

    However, git push still yields the exact same error. It leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?
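
    Before pushing again, it can help to confirm locally that the conf really parses, using the same module gitosis itself uses (a small check script, assuming Python 2.x as in the traceback; run it inside the gitosis-admin working copy):

        # validate_conf.py - quick local sanity check for gitosis.conf
        import ConfigParser

        cfg = ConfigParser.ConfigParser()
        # read() raises ConfigParser.ParsingError if broken lines remain
        cfg.read('gitosis.conf')
        print 'gitosis.conf parsed OK; sections:', cfg.sections()

    Also note that the hook runs after the remote ref has already been updated, so the broken commit may have landed; if the amended (rewritten) commit is then rejected as a non-fast-forward, a forced push of the corrected history may be needed.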


  • Cannot connect to Github?

    - by user2973438
    So I tried to push some updates onto my repo on GitHub via the terminal on Mac OS X 10.8.4, and it doesn't work. I've been getting the same error many times:

        Lillys-MacBook-Air:Yuewei Lilly$ git push origin master
        error: Failed connect to github.com:443; Operation timed out while accessing
        https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack
        fatal: HTTP request failed

    Some background: I've pushed many projects onto GitHub via the terminal before (when I was in Canada). I am currently in Shanghai, China - could it be the GFW? But when I was in Beijing, I was still able to push projects onto GitHub. When I do ping github.com:

        Lillys-MacBook-Air:Yuewei Lilly$ ping github.com
        PING github.com (192.30.252.131): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        ping: sendto: No route to host
        Request timeout for icmp_seq 2
        ping: sendto: Host is down
        Request timeout for icmp_seq 3
        ping: sendto: Host is down
        Request timeout for icmp_seq 4
        ping: sendto: Host is down
        Request timeout for icmp_seq 5
        ping: sendto: Host is down
        Request timeout for icmp_seq 6
        ping: sendto: Host is down
        Request timeout for icmp_seq 7
        ^C
        --- github.com ping statistics ---
        9 packets transmitted, 0 packets received, 100.0% packet loss

    I have ShadowSocks (proxy) turned on. Without it I can't access github.com via a browser; with it, I can. Also, when I do "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!
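
    Since the browser only reaches github.com through the proxy, git likely needs to be routed the same way. A sketch (the local port 1080 is ShadowSocks' common default and is an assumption; adjust it to the actual client settings):

        # point git's HTTPS transport at the local SOCKS proxy
        git config --global http.proxy socks5://127.0.0.1:1080

        # then retry the push
        git push origin master

        # to undo later:
        # git config --global --unset http.proxy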


  • Make my git user and apache user have read/write/delete access

    - by Mr A
    I am having permission problems on my server. I use the user 'developer' to pull my git repository on the server. Then apache uses its own 'apache' user to write files and execute code. I always have problems when the app wants to write something to a directory (i.e. log files, cache, ...) and I then run a cron job that executes with my 'developer' rights and wants to add something to folders that were written by apache. My question is: how can my 'developer' user have the same write/delete access as apache, so the two avoid permission conflicts with each other? I am not fluent in linux commands, so it would help if you could provide links or simply examples of doing so. Thanks.
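
    A common way to handle this (a sketch under assumptions: the Apache user and group are both named 'apache', as on Red Hat-style systems, and the writable paths are placeholders) is to put both users in one group and mark the writable directories setgid so new files inherit that group:

        # let developer join apache's group
        sudo usermod -aG apache developer

        # group-own the writable directories and open them to the group
        sudo chgrp -R apache /var/www/app/logs /var/www/app/cache
        sudo chmod -R g+rwX  /var/www/app/logs /var/www/app/cache

        # setgid bit: files created here by either user keep the 'apache' group
        sudo chmod g+s /var/www/app/logs /var/www/app/cache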


  • Bypass spam check for Auth users in postfix

    - by magiza83
    I would like to know if there is any option to "FILTER" authenticated users in postfix. Let me explain better: I have the amavis and dspam services between postfix(25) and postfix(10026), but I would like to skip this check when the users are authenticated.

        postfix(25)->policyd(10031)->amavis(10024)->postfix(10025)->dspam(dspam.sock)->postfix(10026)--->cyrus
            |                                                                                            /|\
            |________auth users_______________________________________________________________________________|

    My conf is:

        # main.cf
        ...
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_sasl_path = smtpd
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            reject_unauth_destination,
            check_policy_service inet:127.0.0.1:10040,
            reject_invalid_hostname,
            reject_rbl_client multi.uribl.com,
            reject_rbl_client dsn.rfc-ignorant.org,
            reject_rbl_client dul.dnsbl.sorbs.net,
            reject_rbl_client list.dsbl.org,
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client bl.spamcop.net,
            reject_rbl_client dnsbl.sorbs.net,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client ix.dnsbl.manitu.net,
            reject_rbl_client combined.rbl.msrbl.net,
            reject_rbl_client rabl.nuclearelephant.com,
            check_policy_service inet:127.0.0.1:10031,
            permit_mynetworks,
            reject
        ...

    I would like something like "FILTER smtp:localhost:10026" in case they are authenticated, because in my current configuration I'm only bypassing policyd, not amavis and dspam. Thanks.
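
    One commonly suggested arrangement (a sketch, not tested against this setup: it assumes mail clients can be moved to the submission port and that the amavis/dspam chain is attached via content_filter) is to let authenticated users submit on port 587 with the filter overridden to empty, so port 25 keeps the full chain while 587 hands mail straight to delivery:

        # master.cf - submission service for authenticated users only
        submission inet n       -       n       -       -       smtpd
          -o smtpd_sasl_auth_enable=yes
          -o content_filter=
          -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject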


  • Outlook 2010 Rules on an IMAP Account

    - by Sonny
    I have an IMAP account set up in Outlook 2010. This account receives notices from scheduled tasks. It receives a LOT of messages. I want to filter the messages that contain error reports into a new folder. I created a rule in Outlook that looks for specific text in the message. The problem is that the filter doesn't seem to see the text unless I have already viewed the message. How can I work around this limitation?

    EDIT: This post was originally about Outlook 2003. I am now using Outlook 2010.


  • Google Analytics - Profile filter with more than one dimension?

    - by Drewdavid
    I have identified some bot traffic in GA, but to filter it accurately I need to filter by two dimensions at once, namely Browser and ISP. To be clear, I don't want to apply both a filter that blocks the entire ISP segment and one that blocks the entire Browser segment, but only the combination of the two. To illustrate: (screenshot not included in this excerpt). Could someone explain how to do this? It's not apparent from the interface, and I'm not able to find any documentation about it.

