Search Results

Search found 32731 results on 1310 pages for 'regex for html'.

  • Fix YAML syntax highlighting in VIM

    - by Kevin Burke
    The YAML syntax highlighting in Vim 7.3 isn't great. Putting an apostrophe in a line of text triggers quote highlighting even when there's no quote. The same thing happens in other files sometimes too. I've posted a screenshot below. Is there any way to fix this behavior, or is there a different YAML syntax file I can use that won't trigger this behavior? This occurs in both MacVim and Vim in the Terminal. I'm running v7.3. Thanks for your help, Kevin

  • remove words containing non-alpha characters

    - by dnkb
    Given a text file with a space-separated string and a tab-separated integer on each line, I'd like to get rid of all words that contain non-alpha characters, but keep the words consisting of alpha-only characters plus the tab and the integer after them. My attempts, like the ones below, didn't yield anything good. What I was trying to express is something like: "replace anything within word boundaries that starts and ends with 0 or more whatever and has at least one :digits: or :punct: in between".

        sed 's/\b.[:digits::punct:]+.\b//g'
        sed 's/\b.[^:alpha:]+.\b//g'

    What am I missing? See sample input data below. Thank you!

        asdf 754m 563 a2a 754mm 291 754n 463 754 ppp 1409 754pin 4652 pin pin 462 754pins 652 754 ppp 1409 754pin 4652 pi$n pin 462 754/p ins 652 754 pp+p 1409 754 p=in 4652
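
    One rough way to express that from the shell, sketched with awk under the layout described above (words, then a tab, then the integer; input.txt is a placeholder filename): split the word list on spaces, keep only the alpha-only words, and print the tab and the integer back unchanged.

        awk -F'\t' '{
            n = split($1, w, " ")                  # the words are space separated
            out = ""
            for (i = 1; i <= n; i++)
                if (w[i] ~ /^[[:alpha:]]+$/)       # keep alpha-only words
                    out = (out == "") ? w[i] : out " " w[i]
            print out "\t" $2                      # keep the tab + the integer
        }' input.txt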

  • Outlook Signature Broken in Entourage

    - by Eric J.
    Some of our company uses Windows with Outlook 2010, and the rest use Mac with Entourage. When our standard signature line is included in an email that goes to Entourage, the result does not display correctly. It appears that Entourage is mangling the HTML. My working theory is that Entourage encounters inline CSS styles it does not know about and stops processing styles, but I'm really not sure. Question: How can I enter a signature into Outlook 2010 that will render correctly in Entourage? For example, can I specify somehow the exact HTML to use? Here's an example of how the HTML is being changed. Original on Outlook, as received by another Outlook client:

        <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#1785C5'>My Company<br>
        </span></b><span class=apple-style-span><span style='font-size:9.0pt; font-family:"Century Gothic","sans-serif";color:#666666'>123 Main St.</span></span>
        <span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#AFAFAF'>&nbsp;</span></span>
        <span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";color:#666666'>Suite 100</span></span>

    Note the use of spans, color #1785C5 and color #666666. Same original email, as displayed in an Entourage client:

        <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; mso-fareast-font-family:"Times New Roman"'><br>
        <span style='color:#656565'>My Company<br>
        123 Main St Suite 100<br>
        </span>

    Note the use of br tags rather than spans, and the color #656565.

  • Indentation-based Folding for TextMate

    - by Craig Walker
    SASS and HAML have indentation-based syntax, much like Python. Blocks of related code have the same number of spaces at the start of a line. Here's some example code:

        #drawer
          height: 100%
          color: #c2c7c4
          font:
            size: 10px
        .slider
          overflow: hidden
          height: 100%
        .edge
          background: url('/images/foo') repeat-y
        .tab
          margin-top = !drawer_top
          width: 56px
          height: 161px
          display: block

    I'm using phuibonhoa's SASS bundle, and I'd like to enhance it so that the various sections can fold. For instance, I'd like to fold everything under #drawer, everything under .slider, everything under .edge, etc. The bundle currently includes the following folding code:

        foldingStartMarker = '/\*|^#|^\*|^\b|^\.';
        foldingStopMarker = '\*/|^\s*$';

    How can I enhance this to fold similarly-indented blocks?

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite and at some point the following ScriptAliasMatch is used:

        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                    info/refs | \
                    objects/(info/[^/]+ | \
                             [0-9a-f]{2}/[0-9a-f]{38} | \
                             pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                    git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1

    And the target script starts with:

        USER=$1

    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it). But I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me that group captures everything from (?x)^/(.* to ))$ so there is nothing about a user here. My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:

        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>

    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work and what part of the mechanism I missed, since I don't retrieve the user in my script.

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using:

        FOLDERS=`git ls-tree -d --name-only -z master`;

    I can see the directory listing, and I would like to use the RegExp support in bash to run something like:

        if [[ "$FOLDERS" =~ app ]]; then
            ...
        fi

    But that won't work if there was something like an "app lication" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but not sure how to test for that in the bash RegExp. Likewise I know there is support for specifying a particular path in the ls-tree command, and could then pipe that to "wc -l", but I'd have thought it was quicker to get a full directory listing of the root (not recursive) then test for the 2 (or more) folders with the returned output. Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
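
    A rough sketch of one way to keep the NUL separator and still get an exact-name test, using GNU grep ("app" is just the example folder from above): -z makes NUL the record separator, -x anchors the whole record, -F treats the pattern as a fixed string, and -q keeps it silent so only the exit status matters.

        if git ls-tree -d --name-only -z master | grep -zqFx 'app'; then
            echo "app/ exists in master"    # run the app-specific actions here
        fi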

  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server-side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?

  • Add constant value to numeric XML attribute

    - by Dave Jarvis
    Background: Add a constant value to numbers matched with a regular expression, using vim (gvim).

    Problem: The following regular expression will match width="32":

        /width="\([0-9]\{2\}\)"

    Question: How do you replace the numeric value of the width attribute with the result of a mathematical expression that uses the attribute's value? For example, I would like to perform a global replacement along the lines of:

        :%s/width="\([0-9]\{2\}\)"/width="\1+10"/g

    so that it produces width="42" for width="32" and width="105" for width="95". Thank you!
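
    Not a vim answer, but a sketch of the same arithmetic from the shell with perl's /e modifier (perl is an assumption here, and page.xml is a placeholder filename); inside vim the corresponding idea is a \= sub-replace expression with submatch().

        # /e evaluates the replacement as an expression: add 10 to the captured width
        perl -pe 's/width="(\d+)"/"width=\"" . ($1 + 10) . "\""/ge' page.xml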

  • Regular Expression to replace part of URL in XML file

    - by Richie086
    I need a regular expression in Notepad++ to search/replace a string. My document (XML) has several thousand lines that look similar to this:

        <Url Source="Output/username/project/Content/Volume1VolumeName/TopicFileName.htm" />

    I need everything from Volume1 up to the closing .htm" /> to be replaced with X's or some other character, to mask the actual file names in this file. So the resulting string would look like this after the search/replace was performed:

        <Url Source="Output/username/project/Content/Volume1XxxxxxXxxx/XxxxxXxxxXxxx.htm" />

    I am working with confidential information that I cannot release to people outside of my company, but I need to send an example log file to a 3rd party for troubleshooting purposes. FYI the X's do not need to follow the upper/lower case after the replacement, I was just using different case X's for the hell of it :)
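
    A rough shell-side sketch of the same masking with perl rather than Notepad++'s replace dialog (log.xml is a placeholder filename, and the tr///r form needs perl 5.14 or newer): it X's out every letter and digit between Volume1 and .htm while keeping the length and the slashes.

        perl -pe 's{(Volume1)(.*?)(\.htm)}{ $1 . ($2 =~ tr/a-zA-Z0-9/X/r) . $3 }ge' log.xml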

  • Word Find - find any highlighted text that starts with a square bracket

    - by user2953311
    Is there a way to Find highlighted text that ONLY begins with an open square bracket? I've tried using the square bracket as a wildcard, but it won't find any adjoining words. For example, I have a document containing conditional paragraphs, in square brackets, with the "name" of the paragraph highlighted at the beginning:

        "[Document to return Thank you for sending the documents requested earlier.]"

    (the section in bold is highlighted in blue in Word) Is there a way to find "[Document to return"? I hope this makes sense. Thanks in advance

  • How to delete files on the command line with regular expressions?

    - by Jack
    Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. At the moment, if I want to delete all files lower than the number 10, this is easy and I just use a wildcard, e.g.

        rm FOO0*

    However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult.

        rm FOO[13-15]

    does not work, and asks me if I wish to delete all files. Likewise

        rm FOO1[3-5]

    wishes to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ so much for such a basic task?
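
    One sketch that avoids globbing entirely: brace expansion, which both bash and zsh support, generates the exact names, so the 13-15 range becomes three literal arguments; echoing it first shows what would be removed.

        echo rm FOO{13..15}    # preview: rm FOO13 FOO14 FOO15
        rm FOO{13..15}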

  • Is It Possible To Self-Teach PHP, Wordpress, CentOS (Linux), Apache, Nginx etc?

    - by Aahan
    Consider me a total noob who uses a Windows PC and has never touched Linux. But I want to administer, manage and take responsibility for my server, at least at some point, if not now. But since I am a full-time blogger I am unable to find time to study at an institute. So, here is my question: is it possible to self-teach HTML, CSS, PHP, JavaScript, Wordpress, CentOS (or for that matter any Linux distro), Apache, Nginx, and Varnish? Yes, beginning with HTML, absolutely all of them. I might seem overly ambitious and foolish, but I just want to do it. Aren't there any self-taught server admins? (1) Please help me out with the names of good books, links and whatever you can. (2) How long would it take me to get there (approximately)? 3 years? 5 years? (I have a good touch with HTML & Wordpress.) This is a great community, I hope at least some of you will shoot some suggestions at me.

  • cut text from each line in a txt file

    - by bboyreason
    I have a text file where each line looks like this:

        <img border=0 width=555 height=555 src=http://websitelinkimagelinkhere>

    Each line is like that for about 1500 lines. I want to sort of 'grep' each line for 'http://websiteimagelinkhere' (I don't think plain grep will work, because it returns the whole line). The output file should have newlines or tabs after each image link, like the original file. Or, if someone only knows a way to do this with each element being in a cell of the same column, that would be okay too.
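
    A small sketch with grep -o, which prints only the matched part rather than the whole line, one match per line (images.txt is a placeholder filename; drop the final sed if the src= prefix should stay).

        grep -o 'src=[^ >]*' images.txt | sed 's/^src=//'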

  • How to combine RewriteRule of index.php and queries rewrite and avoid Server Error 404?

    - by Binyamin
    Both RewriteRules work fine, except when used together.

    1. Remove all queries except the ?callback=.* query:

        # /api?callback=foo has no rewrite
        # /whatever?whatever=foo has 301 redirect /whatever
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

    2. Rewrite index.php queries api and url=$1:

        # /api returns data index.php?api&url=
        # /api/whatever returns data index.php?api&url=whatever
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Is there any valid combination of these RewriteRules that keeps their functionality? This combination returns Server Error 404 for /api/?callback=foo:

        # Remove all queries except query "callback"
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

        # Rewrite index.php queries
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        # Server Error 404 on /api/?callback=foo and /api/whatever?callback=foo
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

  • sed syntax to remove xml

    - by mjb
    I'm trying to sanitize this output from its metadata to plug it into GreekTools, but I am getting stuck on sed.

        curl --silent www.brainyquote.com | egrep '(span class="body")|(span class="bodybold")' | sed -n '6p; 7p; ' | sed 's/\<*\>//g'

    [ex]

        <span class="body">Literature is news that stays news.</span><br>
        <span class="bodybold">Ezra Pound</span>

    Could someone help me along on this track?
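
    A sketch of the last step using the usual tag-stripping idiom: [^>]* means "anything that is not >", which avoids the word-boundary meaning that \< and \> have in GNU sed.

        curl --silent www.brainyquote.com \
          | egrep '(span class="body")|(span class="bodybold")' \
          | sed -n '6p; 7p' \
          | sed 's/<[^>]*>//g'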

  • How to search a text file for strings between two tokens in Ubuntu terminal and save the output?

    - by Blue
    How can I search a text file for this pattern in Ubuntu terminal and save the output as a text file? I'm looking for everything between the string "abc" and the string "cde" in a long list of data. For example:

        blah blah abc fkdljgn cde blah blah blah
        blah blah blah abc skdjfn cde blah

    In the example above I would be looking for an output such as this:

        fkdljgn
        skdjfn

    It is important that I can also save the data output as a text file. Can I use grep or agrep and if so, what is the format?
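
    A hedged sketch with GNU grep, which supports the needed lookarounds via -P and prints only the match with -o (input.txt and output.txt are placeholder filenames; adjust the two token strings as needed).

        grep -oP '(?<=abc ).*(?= cde)' input.txt > output.txt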

  • how to substitute in multiple lines between {{{ and }}} with sed or awk

    - by chris
    First, the example text:

        .... text ,..
        {{{python
        string1 = 'abcde'
        string2 = '12345'
        print(string1[[1:3]])
        print(string2[[:-1]])
        }}}
        .... text ,..

    The [[ and ]] also occur outside of {{{ ... }}}, and there may be spaces and tabs before {{{ and }}}. I want to substitute all [[ and ]] with [ and ] between {{{ and }}}. NOTICE: I need to write the result back to the original file. (Maybe sed or awk is not the only way to do this?)
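
    A minimal sketch with GNU sed, assuming the {{{ ... }}} pairs are balanced as in the example (notes.txt is a placeholder filename): the /{{{/,/}}}/ address range restricts the two substitutions to those blocks, and -i.bak writes the result back to the original file while keeping a backup copy.

        sed -i.bak '/{{{/,/}}}/{s/\[\[/[/g;s/\]\]/]/g}' notes.txt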

  • Notepad++ Search & Replace with Regular Expressions

    - by Jeremy
    I know it's simple, but I can't get it to work... I have strings like these in a file:

        {span style="display:none"}123{/span}
        {span style="display:none"}456{/span}
        {span style="display:none"}789{/span}

    I want to remove all of these strings. So, I thought a simple regular-expression replace in Notepad++ should be something like:

        {span style="display:none"}[(.)]{/span}

    but this is not working. Thanks for your help!
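
    The same idea sketched with sed from the shell (file.txt is a placeholder filename; a similar pattern, with \d+ for the digits, should also work in Notepad++'s regular-expression search mode): the braces are matched literally via bracket expressions and [0-9]+ matches the number in between.

        sed -E 's|[{]span style="display:none"[}][0-9]+[{]/span[}]||g' file.txt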

  • Search for specific call in asterisk log files

    - by chiborg
    In my Asterisk log file, I have a line like this (truncated):

        Executing [123@mycontext:1] Set("SIP/myhost-b7111840", "__INCOMINGCLI=4711")

    Now I want to do the following filtering while looking at the log file with tail -f:

        1. Match lines with a specific value for "INCOMINGCLI", storing the call ID (the "SIP/myhost-b7111840" part).
        2. Output all subsequent lines that contain the call ID.

    As a bonus, having a grep-like option like -A would be nice. I could do that easily in various programming languages, but how would I do it with standard UNIX commands like sed or awk? Can it be done with these commands?
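
    A rough awk sketch of that two-step filter (the log path is a placeholder and 4711 is the example value from above): it remembers every channel seen with that INCOMINGCLI value, then prints any later line that mentions one of those channels.

        tail -f /var/log/asterisk/full | awk '
            /__INCOMINGCLI=4711/ {
                if (match($0, /SIP\/[^"]+/))             # grab the channel name
                    ids[substr($0, RSTART, RLENGTH)] = 1
            }
            {
                for (id in ids)
                    if (index($0, id)) { print; break }  # follow lines for known calls
            }'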

  • Checking version of Applications installed in ~/Applications with unknown username

    - by ridogi
    I'd like to check the version of Firefox through Apple Remote Desktop on all managed computers. I have written this, but it only checks for Firefox in /Applications:

        /bin/cat /Applications/Firefox.app/Contents/Info.plist | grep -A 1 CFBundleShortVersionString | grep string | sed 's/[/]//' | sed 's/<string>//g'

    For standard users, Firefox auto-update breaks if it is in /Applications, so I instead have it installed in ~/Applications. I'd like to check that copy (if it exists), but I can't specify the path in the command since it is unique to each computer. For example:

        /Users/jon/Applications/Firefox.app
        /Users/arya/Applications/Firefox.app

    Presumably I want to use find and pipe the result to my command. This should work for 10.6 through 10.8.
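
    One possible sketch that sidesteps the plist grepping: glob every user's ~/Applications and let defaults(1) read the version key (this assumes home folders live under /Users; the .plist extension is stripped because older defaults versions expect the path without it).

        for plist in /Users/*/Applications/Firefox.app/Contents/Info.plist; do
            [ -f "$plist" ] || continue                  # skip users without that copy
            ver=$(defaults read "${plist%.plist}" CFBundleShortVersionString)
            echo "$plist: $ver"
        done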

  • Debugging nginx URL rewrite: How do I figure out where the problem is?

    - by pjmorse
    I have a specific URL pattern on a site which needs to be redirected to the HTTPS version. This is a Django site; Nginx checks each URL in memcached, and if it doesn't find a cached version it proxies the request to Apache/mod_python for Django to render the page. The relevant configuration block is:

        rewrite ^/certificate https://mysite.com/certificate ;
        rewrite ^/([a-zA-Z]{2})/certificate https://mysite.com/certificate ;

    ...and it doesn't appear to be working at all. Nginx is:

        $ nginx -V
        nginx version: nginx/0.7.65
        built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        TLS SNI support disabled
        configure arguments: --prefix=/usr/local/nginx --pid-path=/var/run/nginx.pid --with-http_gzip_static_module --with-http_ssl_module

    How can I figure out if the problem is my patterns not matching, or a more obscure configuration problem? (The site is localized to three languages, and the localization is in the URL string, e.g. /US/news/, /DE/about, etc. It tracks localization in the session as well, defaulting to US, so if you just requested /news Django will rewrite to /US/news unless the user has a cookie indicating they're using a different localization. Django handles this, though, not Nginx.)
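
    A quick way to see whether the pattern matches at all is to request the URL from the command line and look for the redirect header; if no Location header comes back, the rewrite never fired (mysite.com and the /DE/ prefix are just the examples from above).

        curl -sI http://mysite.com/DE/certificate | grep -iE '^(HTTP|Location)'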

  • Trouble Letting Users Get to Certain Sites through Squid Proxy

    - by armani
    We have Squid running on a RHEL server. We want to block users from getting to Facebook, other than a couple of specific sites, like our organization's page. Unfortunately, I can't get those specific pages unblocked without allowing ALL of Facebook through.

        [squid.conf]
        # Local users:
        acl local_c src 192.168.0.0/16
        # HTTP & HTTPS:
        acl Safe_ports port 80 443
        # File containing blocked sites, including Facebook:
        acl blocked dst_dom_regex "/etc/squid/blocked_content"
        # Whitelist:
        acl whitelist url_regex "/etc/squid/whitelist"
        # I do know that order matters:
        http_access allow local_c whitelist
        http_access allow local_c !blocked
        http_access deny all

        [blocked_content]
        .porn_site.com
        .porn_site_2.com
        [...]
        facebook.com

        [whitelist]
        facebook.com/pages/Our-Organization/2828242522
        facebook.com/OurOrganization
        facebook.com/media/set/
        facebook.com/photo.php
        www.facebook.com/OurOrganization

    My biggest weakness is regular expressions, so I'm not 100% sure this is all correct. If I remove the "!blocked" part of the http_access rule, all of Facebook works. If I remove "facebook.com" from the blocked_content file, all of Facebook works. Right now, visiting facebook.com/OurOrganization gives a "The website declined to show this webpage / HTTP 403" error in Internet Explorer, and "Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED): Unknown error" in Chrome. WhereGoes.com tells me the URL redirects for that URL go like this:

        facebook.com/OurOrganization -- [301 Redirect] -- http://www.facebook.com/OurOrganization -- [302 Redirect] -- https://www.facebook.com/OurOrganization

    I tried turning up the debug traffic out of Squid using "debug_options ALL,6" but I can't narrow anything down in /var/log/access.log and /var/log/cache.log. I know to issue "squid -k reconfigure" whenever I make changes to any files.

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via a script, then upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end of the file. The problem I am currently trying to solve is that when I update the file (upload from host to FTP, then download from FTP to the Zabbix server), the value does not update. I was trying only log[] at the start, but I suspect that log[] treats the file as a real log file and doesn't check the same lines (although, following the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] has received a value in the past, but it doesn't update. The vfs.file.regexp[] hasn't received any value so far. file.txt has been reuploaded and redownloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't recheck lines already caught if there are any changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent - I haven't found any interesting info about this problem. I have no idea what I might be doing wrong or what I do not know about how Zabbix performs these checks. I see 2 solutions for that: adding more lines to the file instead of making a new one, or making new files and checking them with logrt[], but those don't satisfy my needs. Any help is greatly appreciated. Of course I will provide additional information if requested - for now I don't know what else might be useful.
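
    As a sanity check independent of Zabbix, the regexp itself can be exercised from the shell against the downloaded file (GNU grep -P here; \K just drops the leading part from the printed match). This separates "regexp wrong" from "item not re-reading the file", and it will also expose line-ending surprises such as a trailing \r, which would stop the $ anchor from matching.

        grep -oP 'C.*\s\K[0-9]+\.[0-9]$' /path/to/file.txt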

  • sed: replace only the first range of numbers

    - by Marit Hoen
    Imagine I have an input file like this:

        INSERT INTO video_item_theme VALUES('9', '29');
        INSERT INTO video_item_theme VALUES('19', '312');
        INSERT INTO video_item_theme VALUES('414', '1');

    And I wish to add 10000 to only the first range of numbers, so I end up with something like this:

        INSERT INTO video_item_theme VALUES('10009', '29');
        INSERT INTO video_item_theme VALUES('10019', '312');
        INSERT INTO video_item_theme VALUES('10414', '1');

    My approach would be to prefix "1000" to one-digit numbers, "100" to two-digit numbers, and so on. Something like...

        sed 's/[0-9]\{2\}/10&/g'

    ...isn't very helpful, since it changes each occurrence of two digits, not only the first occurrence of numbers:

        INSERT INTO video_item_theme VALUES('9', '10029');
        INSERT INTO video_item_theme VALUES('10019', '100312');
        INSERT INTO video_item_theme VALUES('100414', '1');
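
    A sketch that does the arithmetic instead of prefixing digits, using perl rather than sed (dump.sql is a placeholder filename): without /g the substitution touches only the first run of digits on each line, and /e evaluates the replacement as an expression.

        perl -pe 's/(\d+)/$1 + 10000/e' dump.sql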

  • How do I make sdiff ignore the * character?

    - by Runcible
    Here's what I'm sure is an easy one, but I can't figure it out. I have two files:

        file1: You are in a maze of twisty little passages, all alike
        file2: You are in a maze of twisty little* passages, all alike

    I want to perform sdiff on these files, but I want to ignore the * character. How do I do this?
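
    One sketch: strip the character to be ignored before comparing, using bash process substitution so the original files stay untouched.

        sdiff <(tr -d '*' < file1) <(tr -d '*' < file2)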
