Search Results

Search found 540 results on 22 pages for 'whitespace'.

Page 9 of 22

  • Removing final bash script argument

    - by ctuffli
    I'm trying to write a script that searches a directory for files and greps for a pattern. Something similar to the below except the find expression is much more complicated (excludes particular directories and files).

        #!/bin/bash
        if [ -d "${!#}" ]
        then
            path=${!#}
        else
            path="."
        fi
        find $path -print0 | xargs -0 grep "$@"

    Obviously, the above doesn't work because "$@" still contains the path. I've tried variants of building up an argument list by iterating over all the arguments to exclude path, such as

        args=${@%$path}
        find $path -print0 | xargs -0 grep "$path"

    or

        whitespace="[[:space:]]"
        args=""
        for i in "${@%$path}"
        do
            # handle the NULL case
            if [ ! "$i" ]
            then
                continue
            # quote any arguments containing white-space
            elif [[ $i =~ $whitespace ]]
            then
                args="$args \"$i\""
            else
                args="$args $i"
            fi
        done
        find $path -print0 | xargs -0 grep --color "$args"

    but these fail with quoted input. For example,

        # ./find.sh -i "some quoted string"
        grep: quoted: No such file or directory
        grep: string: No such file or directory

    Note that if $@ doesn't contain the path, the first script does do what I want.
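    A hedged sketch of one way to drop the trailing path argument before handing the rest to grep: keep the remaining arguments in an array and slice off the last element, so quoted patterns survive word splitting. This is not the original script; it assumes the path, when given, is always the last argument.

        #!/bin/bash
        # Minimal sketch: treat the last argument as the search path only if it is
        # a directory, and slice it off the list passed to grep.
        if [ -d "${!#}" ]; then
            path=${!#}
            grepargs=("${@:1:$#-1}")   # all arguments except the last one
        else
            path="."
            grepargs=("$@")
        fi
        find "$path" -print0 | xargs -0 grep "${grepargs[@]}"

    Because the arguments stay in an array rather than being flattened into a string, an invocation such as ./find.sh -i "some quoted string" keeps the quoted pattern as a single grep argument, which is exactly where the string-concatenation approach above breaks down.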

  • Copy and pasting code into the Python interpreter

    - by wpeters
    There is a snippet of code that I would like to copy and paste into my Python interpreter. Unfortunately, due to Python's sensitivity to whitespace it is not straightforward to copy and paste it in a way that makes sense. (I think the whitespace gets mangled.) Is there a better way? Maybe I can load the snippet from a file. This is just a small example, but if there is a lot of code I would like to avoid typing everything from the definition of the function or copying and pasting line by line.

        class bcolors:
            HEADER = '\033[95m'
            OKBLUE = '\033[94m'
            OKGREEN = '\033[92m'
            WARNING = '\033[93m'
            FAIL = '\033[91m'
            ENDC = '\033[0m'

            def disable(self):
                self.HEADER = ''  # I think stuff gets mangled because of the extra level of indentation
                self.OKBLUE = ''
                self.OKGREEN = ''
                self.WARNING = ''
                self.FAIL = ''
                self.ENDC = ''
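    A hedged sketch of the load-from-a-file route, which sidesteps the interactive prompt's whitespace handling entirely (the filename snippet.py is just an assumption for illustration):

        # Run from the interactive prompt; executes the file in the current session.
        # Python 3:
        exec(open('snippet.py').read())

        # Python 2 also offers:
        # execfile('snippet.py')

    If IPython is an option, its %paste and %cpaste magics exist specifically to paste indented blocks without mangling them.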

  • Java - Reg. Ex. File Question

    - by aloh
    I'm grabbing lines from a text file and sifting through them line by line using regular expressions. I'm trying to search for blank lines, meaning lines that contain nothing or just whitespace. However, what exactly is an empty line? I know that whitespace is \s, but what is a line that is nothing at all? Null (\0)? Newline (\n)? I tried the test harness in the Java tutorial to see what an empty line actually matches, but no luck so far.
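    A hedged sketch of how a "blank" line is usually tested in Java: once a line has been read (its terminator is normally stripped by the reader), a blank line is either the empty string or whitespace only, and \s* covers both. String.matches anchors the whole input implicitly:

        // Assumes 'line' holds one line read from the file, without its line terminator.
        boolean isBlank = line.matches("\\s*");   // true for "" and for whitespace-only lines

        // Equivalent with an explicit Pattern, useful when checking lines in a loop:
        // java.util.regex.Pattern BLANK = java.util.regex.Pattern.compile("^\\s*$");
        // boolean isBlank = BLANK.matcher(line).matches();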

  • How to remove [^a-z\s] in C++

    - by Steven
    So far I have:

        int SimplifyText(char chars[], int length)
        {
            //To lower
            for(int i=0; i<length; i++)
            {
                chars[i] = tolower(chars[i]);
            }

    This function simplifies the text in the first argument, which is an array containing the number of characters given in the second argument. The requirements are:

    - tolower all
    - remove all non-alpha characters
    - replace multiple whitespace by one blank. Any leading whitespace at the beginning of the array should be removed completely.
    - The resulting number of characters should be returned as the value of the function.

    And the annoying part:

    - Another array cannot appear in the function.
    - Cannot use strings, only char arrays.
    - Cannot use G++'s extension for setting an array size using a variable.
    - Oh, and can't use regex :)

    I'm stuck with this, any help would be great. :)
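    A hedged, minimal sketch of one way to meet those constraints in place: a single pass with a write index that never overtakes the read index, so no second array is needed. It is not the original function, and it assumes trailing whitespace may be dropped along with leading whitespace.

        #include <cctype>

        int SimplifyText(char chars[], int length)
        {
            int out = 0;                 // next write position
            bool pendingSpace = false;   // a whitespace run was seen since the last letter
            for (int i = 0; i < length; i++)
            {
                unsigned char c = static_cast<unsigned char>(chars[i]);
                if (std::isspace(c))
                {
                    if (out > 0)         // leading whitespace is ignored entirely
                        pendingSpace = true;
                }
                else if (std::isalpha(c))
                {
                    if (pendingSpace)
                    {
                        chars[out++] = ' ';   // collapse the run to a single blank
                        pendingSpace = false;
                    }
                    chars[out++] = static_cast<char>(std::tolower(c));
                }
                // everything else (digits, punctuation) is simply dropped
            }
            return out;   // new number of characters
        }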

  • Making fscanf Ignore Optional Parameter

    - by adi92
    I am using fscanf to read a file which has lines like

        Number <-whitespace- string <-whitespace- optional_3rd_column

    I wish to extract the number and string out of each line, but ignore the 3rd column if it exists. Example data:

        12 foo something
        03 bar
        24 something #randomcomment

    I would want to extract 12,foo; 03,bar; 24,something while ignoring "something" and "#randomcomment". I currently have something like

        while(scanf("%d %s %*s",&num,&word)>=2) { assign stuff }

    However this does not work with lines with no 3rd column. How can I make it ignore everything after the 2nd string?
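    A hedged sketch of the usual workaround: read each line with fgets first, then parse only the first two fields with sscanf, so anything after the second field is never read whether or not it exists. The filename and buffer sizes are illustrative assumptions.

        #include <stdio.h>

        int main(void)
        {
            char line[256];
            char word[128];
            int num;

            FILE *fp = fopen("data.txt", "r");   /* hypothetical input file */
            if (fp == NULL)
                return 1;

            while (fgets(line, sizeof line, fp) != NULL)
            {
                /* %127s stops at whitespace, so an optional third column or a
                   #comment later on the line is simply ignored */
                if (sscanf(line, "%d %127s", &num, word) == 2)
                {
                    printf("%d,%s\n", num, word);
                }
            }
            fclose(fp);
            return 0;
        }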

  • Removing repeated characters, including spaces, in one line

    - by Thumper
    I currently have a string, say $line='55.25040882, 3,,,,,,', that I want to remove all whitespace and repeated commas and periods from. Currently, I have:

        $line =~ s/[.,]{2,}//;
        $line =~ s/\s{1,}//;

    which works, as I get '55.25040882,3', but when I try

        $line =~ s/[.,\s]{2,}//;

    it pulls out the ", " and leaves the ",,,,,,". I want to retain the first comma and just get rid of the whitespace. Is there a way to elegantly do this with one line of regex? Please let me know if I need to provide additional information.
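    A hedged one-liner that folds the two substitutions into a single global pass; it reproduces the example above, though inputs where whitespace sits between the repeated punctuation (e.g. ", ,") would still behave differently from the two-step version:

        my $line = '55.25040882, 3,,,,,,';
        $line =~ s/\s+|[.,]{2,}//g;      # drop whitespace runs and runs of 2+ dots/commas
        print "$line\n";                 # prints: 55.25040882,3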

  • extending Bash tab completion: how to handle paths / partial commands?

    - by Rawing
    I've added tab completion for my program to bash. It works quite well, but I don't know how to handle partial completion of words. Example: the user types ./program /home/user/De and presses TAB. This is then completed to ./program /home/user/Desktop/ , but there's now a trailing whitespace after 'Desktop/', which is not what I want. Basically, I need a way of preventing bash from adding whitespace after the completed word. I don't want to use bash's completion for paths.
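    A hedged sketch of the usual fix: register the completion with -o nospace so readline stops appending a space after each completed word, optionally together with -o filenames so directory completions still get their trailing slash. The function name _program_complete and the command name ./program are placeholders.

        # Hypothetical completion function for ./program
        _program_complete() {
            local cur=${COMP_WORDS[COMP_CWORD]}
            COMPREPLY=( $(compgen -d -- "$cur") )   # directory-name candidates
        }
        # -o nospace : do not append a space after a completion
        # -o filenames : treat results as filenames (quoting, trailing "/")
        complete -o nospace -o filenames -F _program_complete ./program

    On bash 4 and later the function can also toggle the behaviour per call with compopt -o nospace, so the space is only suppressed while a partial path is being completed.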

  • Using Alt-select in SSMS, Word, and elsewhere

    - by John Paul Cook
    A surprising number of database people and Windows users in general don’t know about Alt-select. This is a Windows technique, not unique to SSMS, that allows a user to select an arbitrary rectangular region of text and delete it, cut it, or copy it. Where I find Alt-select particularly useful in SSMS is when I have a bunch of inline comments that are too far to the right. I want to delete much of the whitespace in front of them to move them to the left without disturbing any of the rest of the T-SQL....(read more)

  • PhpMyAdmin import/export - strange character encoding issues.

    - by John Hunt
    Hello, I'm migrating a site to a new host, and there are a couple of databases on there. There's no SSH access so I'm stuck with phpMyAdmin. The issue is that certain characters (namely just whitespace) seem to be getting corrupted on the new site (same HTML, and Apache doesn't seem to be messing with any encodings - you can see the strange characters have changed when I use less on my Linux machine after downloading a table dump from both servers). The issue isn't as bad if I import into the new database as utf-8 - whitespace characters only have one funny A-type symbol instead of two. I've been trying various combinations of character encoding etc. to no avail.

    Exporting from:

        phpMyAdmin 2.6.2
        MySQL 4.1.20
        MySQL connection collation: utf8_general_ci
        MySQL charset: UTF-8 Unicode (utf8)
        Collation on tables and their fields is: latin1_swedish_ci

    Importing to:

        phpMyAdmin 2.11.9.2
        MySQL client version: 5.0.45
        MySQL charset: UTF-8 Unicode (utf8)
        MySQL connection collation: utf8_general_ci

    The import SQL has this kind of thing in it:

        ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=192 ;

    I get the impression this is actually a bug or something with mysqldump as nothing seems to work... does anyone have any insight into this? Cheers, John.
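    The symptom described (each space gaining one or two "Â"-style characters) is the classic double-encoding pattern: latin1 table data exported over a utf8 connection and re-encoded again on import. A hedged sketch of one common workaround, under the assumption that the table data really is latin1 as the DEFAULT CHARSET=latin1 clauses suggest, is to pin the import connection to latin1 by adding a SET NAMES line at the very top of the dump before uploading it through phpMyAdmin:

        -- added at the top of the exported .sql file
        SET NAMES latin1;

        -- ... the rest of the dump follows unchanged, e.g.
        -- CREATE TABLE ... ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=192 ;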

  • Exchange 2010 SP2 database size

    - by Chad
    I have a single Exchange 2010 SP2 environment with 3 DB stores. I am trying to reduce the sizes by moving the mailboxes to a spare DB and then deleting the empty database. I cleaned up the users' mailboxes to reduce the sizes and set the retention periods to 1 day each and waited several days before moving mailboxes. The databases are backing up fine and clearing log files, but when I move the mailboxes I noticed they were taking a long time, even though some were less than 100MB. When I checked the new database size it seems like the original mailbox size might be moving (1GB instead of 100MB). Exchange is showing the expected smaller mailbox sizes when I run Get-MailboxStatistics against the DB. So if I have 5 mailboxes of 100MB each it is showing like 3GB instead of around 500MB, and no whitespace. I keep waiting, thinking maybe the retention period is not expired yet, but it is much longer than 1 day already. I am setting them both to 0 today to see if that works. What am I missing to get the combined mailbox sizes to match the DB size minus whitespace?
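    A hedged sketch of the Exchange Management Shell commands usually used to compare the physical database size and its free space against what the database actually contains, including disconnected mailboxes still held by retention (the database name DB1 is a placeholder):

        # Physical file size and internal free space ("whitespace")
        Get-MailboxDatabase -Identity DB1 -Status |
            Format-List Name, DatabaseSize, AvailableNewMailboxSpace

        # Everything the database still holds, including disconnected/soft-deleted
        # copies left behind by mailbox moves
        Get-MailboxStatistics -Database DB1 |
            Sort-Object TotalItemSize -Descending |
            Format-Table DisplayName, DisconnectDate, TotalItemSize, TotalDeletedItemSize -AutoSize

    One possible explanation for the mismatch is that moved mailboxes leave a disconnected copy in the source database until the database's mailbox retention period expires, so the space they occupied is not returned as whitespace right away.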

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than actual programming question... (I got shot for posting this on Stack Overflow. They sent me here. At least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind and neatness and consistency in programming are my best friends.

    A thousand years ago when I took my first programming class (Fortran 66) and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. Ok, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated do loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin). Each language has functions, subroutines, or whatever they're called, some built-in, some user coded. They were set off by function_name( parameters );. As in sqrt( x ) or rand( y );

    Lately, there seems to be a new set of punctuation rules, especially in C++ where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr p(new gizmo); This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and 'for' seem to have grown parens: if(true) for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the argument-separating commas go? And finally, functions seem to have shed their parens: sqrt (2) select (...) I know, I know, loosening whitespace rules is good. Keep reading.

    Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, and that the information that the placement of punctuation used to convey is gone? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB. Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

  • TeX Font Mapping

    - by reprogrammer
    I am using a package written on top of XeLaTeX. This package uses fontspec to specify fonts for different parts of your text: latin, non-latin, math mode, ... The package comes with several sample files. I was able to xelatex most of them that depend on regular ttf or otf files. However, one of them tries to set the font of digits in math mode to some font, say "NonLatin Digits". But the font doesn't seem to be a regular font. There are two files in the same directory called "nonlatindigits.map" and "nonlatindigits.tec". TECkit uses these mapping files to generate TeX fonts. However, for some reason it fails to create the files, and xelatex issues the following error message.

        kpathsea: Invalid fontname `NonLatin Digits', contains ' '
        ! Font \zf@basefont="NonLatin Digits" at 10.0pt not loadable: Metric (TFM)
        file or installed font not found.

    The kpathsea program complains about the whitespace, but removing the whitespace does not solve the problem with loading the TFM file. Any clues what I am doing wrong?

  • Text editor capable of viewing invisibles?

    - by Timo
    A recent problem* left me wondering whether there is a text editor out there that lets you see every single character of the file, even if they are invisible? Specifically, I'm not looking for hex editing capabilities; I am interested in a text editor that'll show me all of the invisible characters (not just the common whitespace / line break characters). The BOM marker is just one example, others are e.g. mathematical invisibles or possibly unsupported characters. I'm not looking for a text editor that simply supports a large variety of text encodings / translations between encodings. All text editors I've come across treat the invisible characters correctly, i.e. leave them invisible (or they simply get removed in the translation, as in the case of the BOM marker). I'm asking this mostly out of academic interest, so I'm not particular about any specific OS. I can easily test Linux and OSX solutions, but if you recommend a Windows editor, I would appreciate if you include a description of how the editor handles invisibles other than whitespace / line breaks.

    *The incident that led me to this question: I wrote a perl script using TextWrangler and managed to change the encoding to UTF8 BOM, which inserts the BOM marker at the start of the file. Perl (or rather the operating system) promptly misses the #! and mayhem ensues. It then took me the better part of an afternoon to figure this out since most text editors do not show the BOM marker even with various "show invisibles" options turned on. Now I've learned my lesson and will use less immediately :-).
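    A hedged aside: when an editor will not reveal them, a couple of standard command-line tools make a BOM (or any other invisible byte sequence) visible immediately. The filename script.pl is a placeholder.

        # Does the file start with the UTF-8 BOM (EF BB BF)?
        $ head -c 3 script.pl | xxd
        00000000: efbb bf                                  ...

        # 'file' also mentions a BOM in its description for many text files
        $ file script.pl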

  • Help with fql.multiQuery

    - by Daniel Schaffer
    I'm playing around with the Facebook API's fql.multiQuery method. I'm just using the API Test Console, and trying to get a successful response but can't seem to figure out exactly what it wants. Here's the text I'm entering into the "queries" field: {"tags" : "select subject from photo_tag where subject != 601599551 and pid in ( select pid from photo_tag where subject = 601599551 ) and subject in ( select uid2 from friend where uid1 = 601599551 )", "foo" : "select uid from user where uid = 601599551"} All it'll give me is a queries parameter: array expected. error. I've also tried just about every permutation I could think of involving wrapping the name/query pairs in their own curly braces, adding brackets, adding whitespace, removing whitespace in case it didn't want an associative array (for those watching the edits, I just found out about these wonderful things now... oy), all to no avail. Is there something painfully obvious I'm missing here, or do I need to make like Chuck Norris Jon Skeet and simply will it to do my bidding? Update: A note to anyone finding this question now: The fql.multiquery test console appears to be broken. You can test your query by clicking on the generated url in the test console and manually adding the "queries" parameter into the querystring.

  • Can I write this regex in one step?

    - by Marin Doric
    This is the input string: "23x +y-34 x + y+21x - 3y2-3x-y+2". I want to surround every '+' and '-' character with whitespace, but only if they are not already surrounded on the left or right side. So my input string would look like this: "23x + y - 34 x + y + 21x - 3y2 - 3x - y + 2". I wrote this code that does the job:

        Regex reg1 = new Regex(@"\+(?! )|\-(?! )");
        input = reg1.Replace(input, delegate(Match m) { return m.Value + " "; });
        Regex reg2 = new Regex(@"(?<! )\+|(?<! )\-");
        input = reg2.Replace(input, delegate(Match m) { return " " + m.Value; });

    Explanation:

        reg1 // Match '+' followed by any character that is not ' ' (whitespace), or the same thing for '-'
        reg2 // Same thing, only matching '+' or '-' not preceded by ' ' (whitespace)

    Delegates 1 and 2 just insert " " after and before m.Value (the match value). The question is: is there a way to create just one regex and just one delegate, i.e. do this job in one step? I am new to regex and I want to learn the efficient way.
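    A hedged sketch of a single-pass alternative: instead of treating the missing-left and missing-right cases separately, normalize whatever spacing already surrounds each '+' or '-' to exactly one space on each side. For the sample input this produces the desired output shown above.

        using System;
        using System.Text.RegularExpressions;

        class Program
        {
            static void Main()
            {
                string input = "23x +y-34 x + y+21x - 3y2-3x-y+2";
                // \s*([+-])\s*  : an operator plus any spaces already around it
                // " $1 "        : rewritten as exactly one space on each side
                string result = Regex.Replace(input, @"\s*([+-])\s*", " $1 ");
                Console.WriteLine(result);  // 23x + y - 34 x + y + 21x - 3y2 - 3x - y + 2
            }
        }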

  • Twitter Typeahead only shows 5 results

    - by user3685388
    I'm using the Twitter Typeahead version 0.10.2 autocomplete but I'm only receiving 5 results from my JSON result set. I can have 20 or more results but only 5 are shown. What am I doing wrong? var engine = new Bloodhound({ name: "blackboard-names", prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false } }, remote: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false }, }, datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'), queryTokenizer: Bloodhound.tokenizers.whitespace }); var promise = engine.initialize(); promise .done(function() { console.log("done"); }) .fail(function() { console.log("fail"); }); $("#Impersonate").typeahead({ minLength: 2, highlight: true}, { name: "blackboard-names", displayKey: 'value', source: engine.ttAdapter() }).bind("typeahead:selected", function(obj, datum, name) { console.log(obj, datum, name); alert(datum.id); }); Data: [ { "id": "1", "value": "Adams, Abigail", "tokens": [ "Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi" ] }, { "id": "2", "value": "Adams, Alan", "tokens": [ "Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala" ] }, { "id": "3", "value": "Adams, Alison", "tokens": [ "Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali" ] }, { "id": "4", "value": "Adams, Amber", "tokens": [ "Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb" ] }, { "id": "5", "value": "Adams, Amelia", "tokens": [ "Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame" ] }, { "id": "6", "value": "Adams, Arik", "tokens": [ "Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari" ] }, { "id": "7", "value": "Adams, Ashele", "tokens": [ "Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash" ] }, { "id": "8", "value": "Adams, Brady", "tokens": [ "Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra" ] }, { "id": "9", "value": "Adams, Brandon", "tokens": [ "Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra" ] } ]
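    In typeahead.js 0.10.x the number of rendered suggestions is capped by the dataset's limit option, which defaults to 5; Bloodhound is not the limiting factor. A hedged sketch of the relevant change, keeping the rest of the configuration above as-is:

        $("#Impersonate").typeahead({
            minLength: 2,
            highlight: true
        }, {
            name: "blackboard-names",
            displayKey: "value",
            limit: 20,                    // default is 5 - raise it to show more suggestions
            source: engine.ttAdapter()
        });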

  • Collision Attacks, Message Digests and a Possible solution

    - by Dominar
    I've been doing some preliminary research in the area of message digests, specifically collision attacks on cryptographic hash functions such as MD5 and SHA-1, such as the Postscript example and the X.509 certificate duplicate. From what I can tell, in the case of the Postscript attack, specific data was generated and embedded within the header of the Postscript file (which is ignored during rendering), which brought the internal state of the MD5 to a state such that the modified wording of the document would lead to a final MD equivalent to the original. The X.509 attack took a similar approach, whereby data was injected within the comment/whitespace of the certificate.

    OK, so here is my question, and I can't seem to find anyone asking this question: why isn't the length of ONLY the data being consumed added as a final block to the MD calculation? In the case of X.509 - why are the whitespace and comments being taken into account as part of the MD? Wouldn't a simple process such as one of the following be enough to resolve the proposed collision attacks:

        MD(M + |M|) = xyz
        MD(M + |M| + |M| * magicseed_0 + ... + |M| * magicseed_n) = xyz

    where:

        M            : is the message
        |M|          : size of the message
        MD           : is the message digest function (eg: md5, sha, whirlpool etc)
        xyz          : is the actual message digest value for the message M
        magicseed_i  : is a set of random values generated with a seed based on the internal state prior to the size being added

    This technique should work, as to date all such collision attacks rely on adding more data to the original message. In short, the level of difficulty involved in generating a collision message such that it not only generates the same MD, but is also comprehensible/parsible/compliant and is also the same size as the original message, is immensely difficult if not near impossible. Has this approach ever been discussed? Any links to papers etc. would be nice.

  • latex padding / margin hell

    - by darren
    hi everyone. I have been wrestling with a latex table for far too long. I need a table that has centered headers, and body cells that contain text that may wrap around. Because of the wrap-around requirement, I'm using p{xxx} instead of l for specifying cell widths. The problem this causes is that cell contents are not left justified, so they look like spaced-out junk. To fix this problem I'm using \flushleft for each cell. This does left justify contents, but puts in a ton of white space above and below the contents of the cell. Is there a way to stop \flushleft (or \center, for that matter) from adding copious amounts of vertical whitespace? thanks

        \begin{landscape}
        \centering
        % using p{xxx} here to wrap long text instead of overflowing it
        \begin{longtable}{ | p{4cm} || p{3cm} | p{3cm} | p{3cm} | p{3cm} | p{3cm} |}
        \hline
        &
        % these are table headings. the \center is causing a ton of whitespace as well
        \begin{center} \textbf{HTC HD2} \end{center} &
        \begin{center} \textbf{Motorola Milestone} \end{center} &
        \begin{center} \textbf{Nokia N900} \end{center} &
        \begin{center} \textbf{RIM Blackberry Bold 9700} \end{center} &
        \begin{center} \textbf{Apple iPhone 3GS} \end{center} \\
        \hline
        \hline
        % using flushleft here to left-justify, but again it is causing a ton of
        % white space above and below cell contents.
        \begin{flushleft}OS / Platform \end{flushleft}&
        \begin{flushleft}Windows Mobile 6.5 \end{flushleft}&
        \begin{flushleft}Google Android 2.1 \end{flushleft}&
        \begin{flushleft}Maemo \end{flushleft}&
        \begin{flushleft}Blackberry OS 5.0 \end{flushleft}&
        \begin{flushleft}iPhone OS 3.1 \end{flushleft} \\
        \hline
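    A hedged sketch of the usual way around this: keep the p{...} columns but change their default justification in the column specification via the array package, so no flushleft/center environments (and their paragraph skips) are needed inside the cells. Column widths mirror the ones in the question; headers are centered per cell with \multicolumn.

        % in the preamble
        \usepackage{array}

        % ragged-right (left-justified) body columns
        \begin{longtable}{|>{\raggedright\arraybackslash}p{4cm}||>{\raggedright\arraybackslash}p{3cm}|>{\raggedright\arraybackslash}p{3cm}|>{\raggedright\arraybackslash}p{3cm}|>{\raggedright\arraybackslash}p{3cm}|>{\raggedright\arraybackslash}p{3cm}|}
        \hline
        & \multicolumn{1}{c|}{\textbf{HTC HD2}}
        & \multicolumn{1}{c|}{\textbf{Motorola Milestone}}
        & \multicolumn{1}{c|}{\textbf{Nokia N900}}
        & \multicolumn{1}{c|}{\textbf{RIM Blackberry Bold 9700}}
        & \multicolumn{1}{c|}{\textbf{Apple iPhone 3GS}} \\
        \hline
        \hline
        OS / Platform & Windows Mobile 6.5 & Google Android 2.1 & Maemo & Blackberry OS 5.0 & iPhone OS 3.1 \\
        \hline
        % ... further rows ...
        \end{longtable}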

  • Recognize Dates In A String

    - by Tim Scott
    I want a class something like this:

        public interface IDateRecognizer
        {
            DateTime[] Recognize(string s);
        }

    The dates might exist anywhere in the string and might be any format. For now, I could limit to U.S. culture formats. The dates would not be delimited in any way. They might have arbitrary amounts of whitespace between parts of the date. The ideas I have are:

    - ANTLR
    - Regex
    - Hand rolled

    I have never used ANTLR, so I would be learning from scratch. I wonder if there are libraries or code samples out there that do something similar that could jump start me. Is ANTLR too heavy for such a narrow use? I have used Regex a lot before, but I hate it for all the reasons that most people hate it. I could certainly hand roll it but I'd rather not re-solve a solved problem. Suggestions?

    UPDATE: Here is an example. Given this input:

        This is a date 11/3/63. Here is another one: November 03, 1963; and another one Nov 03, 63 and some more (11/03/1963).

    The dates could be in any U.S. format. They might have dashes like 11-2-1963 or weird extra whitespace inside like this: Nov  3,  1963, and even maybe the comma is missing like [Nov 3 63] but that's an edge case. The output should be an array of seven DateTimes. Each date would be the same: 11/03/1963 00:00:00.
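    A hedged, regex-based sketch of one way this could look for U.S. formats with tolerant internal whitespace. The pattern and the class name are illustrative assumptions, not a complete recognizer; it leans on DateTime.TryParse to reject false positives such as 99/99/99.

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Text.RegularExpressions;

        public class RegexDateRecognizer : IDateRecognizer
        {
            // numeric dates (11/3/63, 11-02-1963) or month-name dates (Nov 3, 63 / November 03, 1963)
            private static readonly Regex DatePattern = new Regex(
                @"\b(\d{1,2}\s*[/-]\s*\d{1,2}\s*[/-]\s*\d{2,4}" +
                @"|(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2}\s*,?\s*\d{2,4})\b",
                RegexOptions.IgnoreCase);

            public DateTime[] Recognize(string s)
            {
                var found = new List<DateTime>();
                foreach (Match m in DatePattern.Matches(s))
                {
                    // Let the framework decide whether the candidate is a real date.
                    DateTime dt;
                    if (DateTime.TryParse(m.Value, CultureInfo.GetCultureInfo("en-US"),
                                          DateTimeStyles.None, out dt))
                    {
                        found.Add(dt);
                    }
                }
                return found.ToArray();
            }
        }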

  • Code Golf: Rotating Maze

    - by trinithis
    Code Golf: Rotating Maze Make a program that takes in a file consisting of a maze. The maze has walls given by '#'. The maze must include a single ball, given by a 'o' and any number of holes given by a '@'. The maze file can either be entered via command line or read in as a line through standard input. Please specify which in your solution. Your program then does the following: 1: If the ball is not directly above a wall, drop it down to the nearest wall. 2: If the ball passes through a hole during step 1, remove the ball. 3: Display the maze. 4: If there is no ball in the maze, exit. 5: Read a line from the standard input. Given a 1, rotate the maze counterclockwise. Given a 2, rotate the maze clockwise. Rotations are done by 90 degrees. It is up to you to decide if extraneous whitespace is allowed. If the user enters other inputs, repeat this step. 6: Goto step 1. You may assume all input mazes are closed. Note, a hole effectively acts as a wall in this regard. You may assume all input mazes have no extraneous whitespace. The shortest source code by character count wins. Example mazes: ###### #o @# ###### ########### #o # # ####### # ###@ # ######### ########################### # # # # @ # # # # ## # # ####o#### # # # # # # ######### # @ ######################

  • How do I ensure that a regex does not match an empty string?

    - by Dancrumb
    I'm using the Jison parser generator for Javascript and am having problems with my language specification. The program I'm writing will be a calculator that can handle feet, inches and sixteenths. In order to do this, I have the following specification:

        %%
        ([0-9]+\s*"'")?\s*([0-9]+\s*"\"")?\s*([0-9]+\s*"s")?   {return 'FIS';}
        [0-9]+("."[0-9]+)?\b                                   {return 'NUMBER';}
        \s+                                                    {/* skip whitespace */}
        "*"                                                    {return '*';}
        "/"                                                    {return '/';}
        "-"                                                    {return '-';}
        "+"                                                    {return '+';}
        "("                                                    {return '(';}
        ")"                                                    {return ')';}
        <<EOF>>                                                {return 'EOF';}

    Most of these lines come from a basic calculator specification. I simply added the first line. The regex correctly matches feet, inch, sixteenths, such as 6'4" (six feet, 4 inches) or 4"5s (4 inches, 5 sixteenths) with any kind of whitespace between the numbers and indicators. The problem is that the regex also matches a null string. As a result, the lexical analysis always records a FIS at the start of the line and then the parsing fails. Here is my question: is there a way to modify this regex to guarantee that it will only match a non-zero length string?
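    A hedged sketch of one common fix: rewrite the rule as an alternation in which the first component present is mandatory, so there is no combination of empty optional groups left that can produce a zero-length match. It keeps the original quoting style for the literal ', ", and s markers; plain numbers then fall through to the NUMBER rule.

        %%
        ([0-9]+\s*"'")(\s*[0-9]+\s*"\"")?(\s*[0-9]+\s*"s")?|([0-9]+\s*"\"")(\s*[0-9]+\s*"s")?|([0-9]+\s*"s")   {return 'FIS';}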

  • Trailing comments after variable assignment subvert comparison

    - by nobar
    In GNU make, trailing comments appended to variable assignments prevent a subsequent comparison (via ifeq) from working correctly. Here's the Makefile...

        A = a
        B = b ## trailing comment
        C = c

        RESULT :=
        ifeq "$(A)" "a"
        RESULT += a
        endif
        ifeq "$(B)" "b"
        RESULT += b
        endif
        ifeq "$(C)" "c"
        RESULT += c
        endif

        rule:
            @echo RESULT=\"$(RESULT)\"
            @echo A=\"$(A)\"
            @echo B=\"$(B)\"
            @echo C=\"$(C)\"

    Here's the output...

        $ make
        RESULT=" a c"
        A="a"
        B="b "
        C="c"

    As you can see from the displayed value of RESULT, the ifeq was affected by the presence of the comment in the assignment of B. Echoing the variable B shows that the problem is not the comment itself, but the intervening space. The obvious solution is to explicitly strip the whitespace prior to comparison, like so...

        ifeq "$(strip $(B))" "b"
        RESULT += b
        endif

    However this seems error prone. Since the strip operation is not needed unless/until a comment is used, you can leave out the strip and everything will initially work just fine -- so chances are you won't always remember to add the strip. Later, if someone adds a comment when setting the variable, the Makefile no longer works as expected. Note: there is a closely related issue, as demonstrated in this question, that trailing whitespace can break string compares even if there is no comment. Question: is there a more fool-proof way to deal with this issue?
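    A hedged sketch of one way to make the comparison itself whitespace-proof: wrap it in a small user-defined function so the $(strip ...) can never be forgotten at the call sites. The name streq is just an assumption for illustration, and arguments containing commas would need extra care.

        # streq: expands to "yes" if its two arguments are equal after stripping
        # leading/trailing whitespace, and to the empty string otherwise.
        streq = $(if $(subst x$(strip $1),,x$(strip $2)),,yes)

        ifeq "$(call streq,$(A),a)" "yes"
        RESULT += a
        endif
        ifeq "$(call streq,$(B),b)" "yes"
        RESULT += b
        endif
        ifeq "$(call streq,$(C),c)" "yes"
        RESULT += c
        endif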

  • PowerPoint: print slides - enlarge their size

    - by Franz
    In PowerPoint (2007), it is possible to print multiple slides on a paper as a handout. However, there is quite a lot of whitespace in between them and around them. Is there another way to enlarge the slides (I'm using the "6 per page" option)? And I don't mean "Adjust to paper size" (or whatever that's called in English), because the difference is just minimal.

  • How to ignore an autocmd in vim's undo history?

    - by Dave Vogt
    I have the following autocommand, which basically strips whitespace at the end of each line. Unfortunately, at each save it inserts a step into the undo history that jumps to the beginning of the file, which is quite annoying. Is there a way to make vim ignore the jumping around in the following command, so that undoing keeps the cursor in position?

        autocmd BufWritePre *
            \ let s:bufwritepre_currline = line('.') |
            \ let s:bufwritepre_currcol = col('.') |
            \ silent %s/\s*$// |
            \ call cursor(s:bufwritepre_currline, s:bufwritepre_currcol)
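    A hedged sketch of an alternative many configurations use: do the stripping in a function, save and restore the full window view around the substitution, and suppress the search-register/jumplist side effects. The e flag avoids errors when nothing matches, and the pattern is tightened to \s\+$ so lines without trailing whitespace are never touched. It addresses the cursor jumping; the extra undo entry only disappears when the buffer genuinely has nothing to strip.

        function! StripTrailingWhitespace()
            let l:view = winsaveview()               " remember cursor, scroll position, etc.
            silent! keeppatterns %s/\s\+$//e         " strip without clobbering the last search
            call winrestview(l:view)                 " put everything back where it was
        endfunction

        autocmd BufWritePre * call StripTrailingWhitespace()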
