Search Results

Search found 5433 results on 218 pages for 'escaped characters'.

  • PDF files created on iPad don't display correctly on Windows

    - by user286028
    My iPhone app creates PDF files (in Arial font). The plain iPhone 3.1.x version works great (other than the known issue that PDFs created on the iPhone can't be viewed correctly in Google Docs or on the BlackBerry). As I am updating my project for OS 3.2 and the iPad, it works just the same, and the PDFs still look great on the iPhone, iPad and MacOS (Preview app). But now on Windows (Vista), Acrobat 9.3.1 says "Cannot extract the embedded font 'XYZABC+ArialMT'. Some characters may not display or print correctly". And in fact Acrobat then uses some generic font instead of Arial (or whatever other font I try). Quartz 3.2 seems to generate these "random" embedded font names each time it creates a PDF (the XYZABC changes around each time). I can't tell whether the problem is just the somewhat strange "temporary" embedded font name with the plus sign, or the way Quartz 3.2 is embedding fonts. I have tried my existing code (using CGPDFContext* functions), and also the newly supported UIGraphics* functions, with the same results. Has anyone else tried creating PDFs on the iPad yet and gotten them to display correctly on Windows?

  • Double-checking: no db-wide 'unicode switch' for SQL Server in the foreseeable future, i.e. like Oracle

    - by user72150
    Hi all, I believe I know the answer to this question, but wanted to confirm. Question: Does SQL Server offer (or will it in the foreseeable future) a database-wide "unicode switch" which says "store all characters in Unicode (UTF-16, UCS-2, etc.)", i.e. like Oracle? The Context: Our application has provided "CJK" (Chinese-Japanese-Korean) support for years--using Oracle as the db store. Recently folks have been asking for the same support in SQL Server. We store our db schema definition in XML and generate the vendor-specific definitions (Oracle, SQL Server) using vendor-specific XSL. We can make the change easily. The problem is upgrades. Generated scripts would need to change the column types for 100+ columns from varchar to nvarchar, varchar(max) to nvarchar(max), etc. These changes require dropping and recreating indexes and foreign keys if any indexes/FKs exist on the column. Non-trivial. Risky. A DB-wide character encoding would eliminate programming changes for us. (I.e. we would not need to change the column types from varchar to nvarchar; SQL Server would correctly store Unicode data in varchar columns.) I had thought that eventually SQL Server would "see the light" and allow storing Unicode in varchar/clob columns. Evidently not yet. Recap: So just to triple check: does MSSQL offer a database-wide switch for character encoding? Will it in SQL2008R3? Or 2010? Thanks, Bill
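
    The per-column migration pain described above looks roughly like this; a hedged sketch with hypothetical table, column and index names, not the poster's actual schema:

        -- Each indexed column needs its index (and any FK) dropped and rebuilt around the type change.
        DROP INDEX IX_Item_Name ON dbo.Item;
        ALTER TABLE dbo.Item ALTER COLUMN Name nvarchar(100) NOT NULL;
        CREATE INDEX IX_Item_Name ON dbo.Item (Name);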

  • DotNetOpenAuth OpenID on ISA 2006 Reverse Proxy problem

    - by userb00
    I am trying to host my site that uses DotNetOpenAuth (OpenID) behind ISA 2006 (reverse proxy). After it authenticates with a provider (such as Google), it returns with %253A in the URL. However, the ISA HTTP filter rejects the request. What I needed to do was, on the ISA web publishing rule, right-click, open the HTTP policy properties, and uncheck "Verify Normalization", and it worked. Is this a problem on ISA 2006 generally? Are other firewalls having similar problems? Or, is it an OpenID or DotNetOpenAuth issue? Is it safe to disable Normalization checking on ISA? According to MSDN, quote "Web servers receive requests that are URL encoded. This means that certain characters may be replaced with a percent sign (%) followed by a particular number. For example, %20 corresponds to a space, so a request for http://myserver/My%20Dir/My%20File.htm is the same as a request for http://myserver/My Dir/My File.htm. Normalization is the process of decoding URL-encoded requests. Because the % can be URL encoded, an attacker can submit a carefully crafted request to a server that is basically double-encoded. If this occurs, Internet Information Services (IIS) may accept a request that it would otherwise reject as not valid. When you select Verify Normalization, the HTTP filter normalizes the URL two times. If the URL after the first normalization is different from the URL after the second normalization, the filter rejects the request. This prevents attacks that rely on double-encoded requests. Note that while we recommend that you use the Verify Normalization function, it may also block legitimate requests that contain a %."
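
    The %253A in the returned URL is exactly the double-encoding the filter looks for: it decodes once to %3A and a second time to ':'. A quick way to see this (a hedged illustration in C#, since DotNetOpenAuth is .NET; not code from the post):

        var once  = Uri.UnescapeDataString("%253A"); // "%3A" - first normalization
        var twice = Uri.UnescapeDataString(once);    // ":"   - second normalization differs, so ISA rejects the request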

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that Base64 Encoding allows for a '/' character which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from wikipedia: A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general. Using .NET I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like: string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/'); // Append '=' char(s) if necessary - how best to do this? // My normal base64 decoding now uses encodedText But, I need to potentially add one or two '=' chars to the end which looks a little more complex. My encoding logic should be a little simpler: // Perform normal base64 encoding byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText); string base64EncodedText = Convert.ToBase64String(encodedBytes); // Apply URL variant string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_'); I have seen the Guid to Base64 for URL StackOverflow entry, but that has a known length and therefore they can hardcode the number of equal signs needed at the end.
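
    For the decode side, one way to restore the stripped padding is to pad the string back up to a multiple of 4 before calling the standard decoder. A minimal sketch (the method name is mine, not an existing API):

        static byte[] DecodeUrlSafeBase64(string base64UrlEncodedText)
        {
            string s = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)
            {
                case 2: s += "=="; break;  // two padding chars were stripped
                case 3: s += "=";  break;  // one padding char was stripped
                // case 0: nothing was stripped (length % 4 == 1 is never valid base64)
            }
            return Convert.FromBase64String(s);
        }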

  • Problem in IE8 with GET parameters when opening a new window with JavaScript

    - by amfa95
    Hi, I have a problem with IE8 and opening a new window with JavaScript and submitting parameters with special characters. <a href="javascript:oWin('/html/de/4664286/printregistrationcontent.html?12-security question&#61;Wie hei&#223;t Ihr Lieblingsrestaurant','PRINT',800,600);" class="print">Seite drucken</a> The problem is the letter 'ß' (sharp S). As you can see, the string above is encoded due to anti-XSS measures. This link works in FF and IE6, but IE8 is transmitting the URL parameter as a character with code 65*** (don't know the exact value). In the opening window you will only see a square (because a character with code 65000+ is not printable). I also tried to use URL encoding instead of HTML encoding <a href="javascript:oWin('/html/de/4664286/printregistrationcontent.html?12-security question%3DWie hei%C3%9Ft Ihr Lieblingsrestaurant','PRINT',800,600);" class="print">Seite drucken</a> If I click on this link in FF or IE6 it works as expected, but IE8 will fail to transmit the "ß" to the server and therefore will also get it back in the wrong way. If I paste this URL into IE8 it will work too, but not if the window is opened by JavaScript. The JavaScript function oWin is defined as follows: function oWin(url,title,sizeH,sizeV) { winHandle = top.open(url,title,'toolbar=no,directories=no,status=yes,scrollbars=yes,menubar=no,resizable=no,width='+sizeH+',height='+sizeV); if(navigator.appVersion.indexOf("MSIE 3",0)==-1) id = setTimeout('winHandle.focus()',1000); } If someone has an idea where to look for the reason, please answer to this. Thank you amfa
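
    One thing worth trying, offered as a hedged sketch rather than a confirmed fix: percent-encode the query value at runtime with encodeURIComponent before handing the URL to oWin, so IE8 never has to pick an encoding for the raw 'ß' (or pre-decode a literal %C3%9F inside a javascript: href):

        var url = '/html/de/4664286/printregistrationcontent.html?12-security question='
                + encodeURIComponent('Wie heißt Ihr Lieblingsrestaurant');   // becomes %C3%9F
        oWin(url, 'PRINT', 800, 600);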

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection. 'Is "double hashing" a password less secure than just hashing it once?' suggests that hashing multiple times may be a good idea. 'How to implement password protection for individual files?' suggests using salt. I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How to achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters. The hashing mechanism must be available in PHP. It must be safe. It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?). Also, should I store two fields in the database (one using MD5 and another one using SHA, for example)? Would it make it safer or less safe? In case I wasn't clear enough, I want to know which hashing function(s) to use and how to pick a good salt in order to have a safe and fast password protection mechanism. EDIT: The website shouldn't contain anything too sensitive, but still I want it to be secure. EDIT2: Thank you all for your replies, I'm using hash("sha256",$salt.":".$password.":".$id) Questions that didn't help: What's the difference between SHA and MD5 in PHP; Simple Password Encryption; Secure methods of storing keys, passwords for asp.net; How would you implement salted passwords in Tomcat 5.5
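
    For what it's worth, current PHP ships a built-in API for exactly this. A minimal sketch, not taken from the answers above, with the cost value only an example to tune:

        <?php
        // password_hash() generates its own random salt and embeds it in the result (bcrypt by default).
        $hash = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);

        // Later, at login time:
        if (password_verify($password, $hash)) {
            // password matches
        }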

  • Pythonic mapping of an array (Beginner)

    - by scott_karana
    Hey StackOverflow, I've got a question related to a beginner Python snippet I've written to introduce myself to the language. It's an admittedly trivial early effort, but I'm still wondering how I could have written it more elegantly. The program outputs NATO phonetic readable versions of an argument, such as "H2O" - "Hotel 2 Oscar", or (lacking an argument) just outputs the whole alphabet. I mainly use it for calling in MAC addresses and IQNs, but it's useful for other phone support too. Here's the body of the relevant portion of the program: #!/usr/bin/env python import sys nato = { "a": 'Alfa', "b": 'Bravo', "c": 'Charlie', "d": 'Delta', "e": 'Echo', "f": 'Foxtrot', "g": 'Golf', "h": 'Hotel', "i": 'India', "j": 'Juliet', "k": 'Kilo', "l": 'Lima', "m": 'Mike', "n": 'November', "o": 'Oscar', "p": 'Papa', "q": 'Quebec', "r": 'Romeo', "s": 'Sierra', "t": 'Tango', "u": 'Uniform', "v": 'Victor', "w": 'Whiskey', "x": 'Xray', "y": 'Yankee', "z": 'Zulu', } if len(sys.argv) < 2: for n in nato.keys(): print nato[n] else: # if sys.argv[1] == "-i" # TODO for char in sys.argv[1].lower(): if char in nato: print nato[char], else: print char, As I mentioned, I just want to see suggestions for a more elegant way to code this. My first guess was to use a list comprehension along the lines of [nato[x] for x in sys.argv[1].lower() if x in nato], but that doesn't allow me to output any non-alphabetic characters. My next guess was to use map, but I couldn't format any lambdas that didn't suffer from the same corner case. Any suggestions? Maybe something with first-class functions? Messing with Array's guts? This seems like it could almost be a Code Golf question, but I feel like I'm just overthinking :)
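
    A hedged sketch of the kind of tidier version being asked for (it reuses the nato dict from the snippet above, and the no-argument case prints the whole alphabet on one line instead of one word per line): dict.get with a default lets non-alphabetic characters pass through unchanged, which is exactly the corner case the list comprehension drops.

        arg = sys.argv[1] if len(sys.argv) > 1 else ''.join(sorted(nato))
        print ' '.join(nato.get(char, char) for char in arg.lower())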

  • Open an Excel file using COM and save it as an .xml file

    - by chupinette
    Hi. I'm trying the following code: <?php $workbook = "D:\b2\\test.XLS"; $sheet = "Sheet1"; #Instantiate the spreadsheet component. $ex = new COM("Excel.sheet") or Die ("Did not connect"); #Get the application name and version print "Application name:{$ex->Application->value}<BR>" ; print "Loaded version: {$ex->Application->version}<BR>"; #Open the workbook that we want to use. $wkb = $ex->application->Workbooks->Open($workbook) or Die ("Did not open"); #Create a copy of the workbook, so the original workbook will be preserved. $ex->Application->ActiveWorkbook->SaveAs("D:\b2\Ourtest.xml"); #$ex->Application->Visible = 1; #Uncomment to make Excel visible. #Optionally, save the modified workbook $ex->Application->ActiveWorkbook->SaveAs("D:\Ourtest.xml"); #Close all workbooks without questioning $ex->application->ActiveWorkbook->Close("False"); unset ($ex); ?> This actually works and creates the Ourtest.xml file. But I'm getting characters like: ÐÏࡱá þÿ þÿÿÿ I have tried with SaveAs("D:\Ourtest.pdf") and it says the file has been corrupted or incorrectly decoded. Can anyone help me please? Thanks
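
    A hedged guess at what is happening, not a confirmed answer: SaveAs keeps Excel's native binary format unless a FileFormat argument is passed, so "Ourtest.xml" is really an .xls file under a different name (the ÐÏࡱá bytes look like the OLE compound-document signature). Something along these lines may help; 46 should be the XlFileFormat value for xlXMLSpreadsheet (Excel 2003 XML), but verify it against your Excel version's object model:

        <?php
        $xlXMLSpreadsheet = 46;   // XlFileFormat enum value - an assumption, check your Excel version
        $ex->Application->ActiveWorkbook->SaveAs("D:\\b2\\Ourtest.xml", $xlXMLSpreadsheet);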

  • Preg Expression to identify classes/ids in a CSS file that have no contents

    - by dclowd9901
    I'm in the process of updating some old CSS files in our systems, and we have a bunch that have lots of empty classes simply taking up space in the file. I'd love to learn how to write Regular expressions, but I just don't get them. I'm hoping the more I expose myself to them (with a little more cohesive explanation), the more I'll end up understanding them. The Problem That said, I'm looking for an expression that will identify text followed by a '{' (some have spaces in between, and some do not) and if there are no letters or numbers between that bracket and '}' (spaces don't count), it will be identified as a matching string. I suppose I can trim the whitespace out of the doc before I run a regular expression through it, but I don't want to change the basic structure of the text. I'm hoping to return it into a large <textarea>. Bonus points for explaining the characters and their meanings, and also an expression identifying lines in the copy without any text or numbers, as well. I will likely use the final expression in PHP script. tl;dr: Regular Expression to match: .a_class_or #an_id { /* if there aren't any alphanumerics in here, this should be a matching line of text */ }
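
    As a starting point, a hedged sketch of a pattern for rules whose body is nothing but whitespace (the file name is hypothetical, and letting comments inside the braces still count as "empty", as in the tl;dr example, would need an extra alternative):

        <?php
        $css = file_get_contents('old.css');                 // hypothetical file name
        // [^{}]+  = the selector: one or more characters that are not braces
        // \{\s*\} = a literal '{', any amount of whitespace only, then a literal '}'
        preg_match_all('/[^{}]+\{\s*\}/', $css, $matches);
        print_r($matches[0]);                                // each entry is an empty rule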

  • C# FileStream position is off after calling ReadLine()

    - by Cristi Diaconescu
    I'm trying to read a (small-ish) file in chunks of a few lines at a time, and I need to return to the beginning of particular chunks. The problem is, after the very first call to streamReader.ReadLine(); the streamReader.BaseStream.Position property is set to the end of the file! Now I assume some caching is done behind the scenes, but I was expecting this property to reflect the number of bytes that I used from that file. For instance, calling ReadLine() again will (naturally) return the next line in the file, which does not start at the position previously reported by streamReader.BaseStream.Position. My question is, how can I find the actual position where the 1st line ends, so I can return there later? I can only think of manually doing the bookkeeping, by adding the lengths of the strings returned by ReadLine(), but even here there are a couple of caveats: ReadLine() strips the new-line character(s) which may have a variable length (is it '\n'? is it "\r\n"? etc.) I'm not sure if this would work ok with variable-length characters ...so right now it seems like my only option is to rethink how I parse the file, so I don't have to rewind. If it helps, I open my file like this: using (var reader = new StreamReader( new FileStream(m_path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))) {...} Any suggestions?
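
    One way to do the bookkeeping described above, as a hedged sketch only: measure each line with the reader's encoding instead of trusting BaseStream.Position. It assumes every line ends in Environment.NewLine and that there is no BOM, which are exactly the caveats raised in the post:

        long offset = 0;                                   // byte position of the current line start
        var encoding = reader.CurrentEncoding;
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            long lineStart = offset;                       // remember this to Seek back to later
            offset += encoding.GetByteCount(line) + encoding.GetByteCount(Environment.NewLine);
        }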

  • Displaying ppt, doc, and xls in UIWebView doesn't work but pdf does

    - by slugolicious
    It looks like a few people on stackoverflow get this to work but their code isn't posted. I'm using [web loadData:data MIMEType:MIMEType textEncodingName:@"UTF-8" baseURL:nil]; where MIMEType is: @"application/vnd.ms-powerpoint" @"application/vnd.ms-word" @"application/vnd.ms-excel" (BTW, I've seen DOC files use mimetype @"application/msword" but the "vnd" version seems more appropriate. I tried both just in case.) I verified that my 'data' is correct. PDF and TXT files work. When the UIWebView displays PPT, DOC, or XLS files, it's blank. I put NSLOG statements in my UIWebViewDelegate calls. shouldStartLoadWithRequest:<NSMutableURLRequest about:blank> navType:5 webViewDidStartLoad: didFailLoadWithError:Error Domain=NSURLErrorDomain Code=100 UserInfo=0x122503a0 "Operation could not be completed. (NSURLErrorDomain error 100.)" didFailLoadWithError:Error Domain=WebKitErrorDomain Code=102 UserInfo=0x12253840 "Frame load interrupted" so obviously the load is failing, but why? If I change my mimetype to @"text/plain" for a PPT file, the UIWebView loads fine and displays unprintable characters, as expected. That's telling me the 'data' passed to loadData: is ok. Meaning my mimetypes are bad? And just to make sure my PPT, DOC, and XLS files are indeed ok to display, I created a simple html file with anchor tags to the files. When the html file is displayed in Safari on the iPhone, clicking on the files displays correctly in Safari. I tried to research the error code displayed in didFailLoadWithError (100) but all the documented error codes are negative and greater than 1000 (as seen in NSURLError.h). -(void)webView:(UIWebView *)webView didFailLoadWithError:(NSError *)error { NSLog(@"didFailLoadWithError:%@", error); }
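
    A workaround that is sometimes suggested for this, offered here only as a hedged sketch (the temp file name and extension are assumptions): write the data to disk and let the UIWebView load it through a file URL with loadRequest:, which sidesteps loadData:'s nil baseURL entirely.

        NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"preview.ppt"];
        [data writeToFile:path atomically:YES];
        [web loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:path]]];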

  • Using YQL multi-query & XPath to parse HTML, how to escape nested quotes?

    - by Tivac
    The title is more complicated than it has to be, here's the problem query. SELECT * FROM query.multi WHERE queries=" SELECT * FROM html WHERE url='http://www.stumbleupon.com/url/http://www.guildwars2.com' AND xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span'; SELECT * FROM xml WHERE url='http://services.digg.com/1.0/endpoint?method=story.getAll&link=http://www.guildwars2.com'; SELECT * FROM json WHERE url='http://api.tweetmeme.com/url_info.json?url=http://www.guildwars2.com'; SELECT * FROM xml WHERE url='http://api.facebook.com/restserver.php?method=links.getStats&urls=http://www.guildwars2.com'; SELECT * FROM json WHERE url='http://www.reddit.com/button_info.json?url=http://www.guildwars2.com'" Specifically this line, xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span' It's problematic because of the quoting, I have to nest them three levels deep and I've run out of quote characters to use. I've tried the following variations without success: //no attribute quoting xpath='//li[@class=listLi]/div[@class=views]/a/span' //try to quote attribute w/ backslash & single quote xpath='//li[@class=\'listLi\']/div[@class=\'views\']/a/span' //try to quote attribute w/ backslash & double quote xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span' //try to quote attribute with double single quotes, like SQL xpath='//li[@class=''listLi'']/div[@class=''views'']/a/span' //try to quote attribute with double double quotes, like SQL xpath='//li[@class=""listLi""]/div[@class=""views""]/a/span' //try to quote attribute with quote entities xpath='//li[@class=&quot;listLi&quot;]/div[@class=&quot;views&quot;]/a/span' //try to surround XPath with backslash & double quote xpath=\"//li[@class='listLi']/div[@class='views']/a/span\" //try to surround XPath with double double quote xpath=""//li[@class='listLi']/div[@class='views']/a/span"" All without success. I don't see much out there about escaping XPath strings but everything I've found seems to be variations on using concat (which won't help because neither ' nor " are available) or html entities. Not using quotes for the attributes doesn't throw an error but fails because it's not the actual XPath string I need. I don't see anything in the YQL docs about how to handle escaping. I'm aware of how edge-casey this is but was hoping they'd have some sort of escaping guide.

  • SQL Server 2008 - Editing Tables: Bit columns require 'True' or 'False'

    - by CJM
    Not so much a question as an observation... I'm just upgrading to SQL Server 2008 on my development machine in anticipation of upgrading my live applications. I didn't anticipate any problems since [I think] I generally use standard T-SQL, and probably not too far from ANSI standard SQL. So far so good, but I was really thrown by a very simple change: I was creating a simple, small look-up table to store a list of codes and including a bit column to indicate the current default code. But when I used the new/modified 'Edit Top 200 Rows' option, and entered my 0s and 1s in the bit column I got an error: 'Invalid value for cell - String was not recognised as a valid boolean' After a bit of head-scratching, I tried True and False - and they worked. So it seems this new Edit feature requires 4 or 5 characters to be typed, rather than the previous 1. Checking further, we can still use '...where bitval = 1' but can now also use '...where bitval = 'true''. But any results returned render these bit columns as 0 or 1 still. It all sounds like half a step backwards. Not the end of the world, but an unnecessary annoyance. Does anybody have any insight on this issue? Or are there any other new gotchas with SQL Server 2008?

  • Is jQuery.parseJSON able to process all valid json?

    - by murze
    This piece of valid json (it has been generated using php's json_encode): {"html":"form is NOT valid<form id=\"articleform\" enctype=\"application\/x-www-form-urlencoded\" method=\"post\" action=\"\"><dl class=\"zend_form\">\n<dt id=\"title-label\">&nbsp;<\/dt>\n<dd id=\"title-element\">\n<input type=\"text\" name=\"title\" id=\"title\" value=\"Artikel K\"><\/dd>\n<dt id=\"articleFormSubmitted-label\">&nbsp;<\/dt>\n<dd id=\"articleFormSubmitted-element\">\n<input type=\"hidden\" name=\"articleFormSubmitted\" value=\"1\" id=\"articleFormSubmitted\"><\/dd>\n<dt id=\"submit-label\">&nbsp;<\/dt><dd id=\"submit-element\">\n<input type=\"submit\" name=\"submit\" id=\"submit\" value=\"Bewaar artikel\" onclick=\"this.value='Bezig...';\"><\/dd><\/dl><\/form><script type=\"text\/javascript\">\n\t $(\"#articleform\").submit(function(){\n $.post(\"\/admin\/ajax\/contenttree\/node\/9\/ajaxtarget\/ajaxContainer\", $(\"#articleform\").serialize(), function(html){$(\"#ajaxContainer\").html(html);} );\n\t\t return false;\n\t });\n\n <\/script>","newNodeName":""} is giving jQuery.parseJSON(data) and me a hard time. With this piece of code: alert('start'); alert(data); jQuery.parseJSON(data) alert('stop'); I get the message 'start' and then the data (the json string above) is shown. The message 'stop' never appears. When I use this json: {"html":"test","newNodeName":""} it works fine. I've verified that my first big chunk of json is valid. Why isn't it processed by jQuery.parseJSON? Are there any special characters that don't go well with json?
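
    A hedged debugging step rather than a diagnosis: wrapping the call makes the actual parse error visible instead of letting the script die silently before alert('stop'):

        try {
            var parsed = jQuery.parseJSON(data);
            alert('stop: ' + parsed.newNodeName);
        } catch (e) {
            alert('parseJSON failed: ' + e);
        }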

  • XSLT: how to ignore unnecessary white space?

    - by arnaud
    Hi, Given this example XML file: <doc> <tag> Hello ! </tag> <tag> My name is John </tag> </doc> And the following XSLT sheet: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <xsl:for-each select="doc/tag"> <xsl:value-of select="."/> </xsl:for-each> </xsl:template> </xsl:stylesheet> How should I change it in order to ignore line feeds and convert any group of white-space characters to just one space in the items? In other words, I would like to obtain: Hello! My name is John Without all those silly line feeds. ...the question is how. Thanks in advance!
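
    A minimal sketch of the usual approach: XPath's normalize-space() trims leading/trailing whitespace and collapses every internal run of whitespace to a single space, so the value-of line becomes:

        <xsl:value-of select="normalize-space(.)"/>
        <xsl:text> </xsl:text>  <!-- optional separator between the items -->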

  • shortest digest of a string

    - by meta
    [Description] Given a string of char type, find a shortest digest, which is defined as: a shortest sub-string which contains all the characters in the original string. [Example] A = "aaabedacd" B = "bedac" is the answer. [My solution] Define an integer table with 256 elements, which is used to record the occurrence count of each kind of character in the current sub-string. Scan the whole string, counting the total kinds of character in the given string using the above table. Use two pointers, start and end, which initially point to the start and (start + 1) of the given string. The current count of character kinds is 1. Expand sub-string [start, end) at the end until it contains all kinds of character. Update the shortest digest if possible. Contract sub-string [start, end] at the start by one character each time, trying to restore its digest property if necessary by step 4. The time cost is O(n), and the extra space cost is constant. Any better solution without extra space?
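
    A hedged sketch of the two-pointer approach described above, written in Python (the post names no language) just to make the expand/contract steps concrete:

        from collections import defaultdict

        def shortest_digest(s):
            need = set(s)                    # every distinct character must appear in the window
            counts = defaultdict(int)
            have = 0
            best = s
            start = 0
            for end, ch in enumerate(s):
                counts[ch] += 1              # expand the window at the end
                if counts[ch] == 1:
                    have += 1
                while have == len(need):     # window covers all kinds: record it, then contract
                    if end - start + 1 < len(best):
                        best = s[start:end + 1]
                    counts[s[start]] -= 1
                    if counts[s[start]] == 0:
                        have -= 1
                    start += 1
            return best

        print(shortest_digest("aaabedacd"))  # -> "bedac"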

  • Parsing Lisp S-Expressions with known schema in C#

    - by Drew Noakes
    I'm working with a service that provides data as a Lisp-like S-Expression string. This data is arriving thick and fast, and I want to churn through it as quickly as possible, ideally directly on the byte stream (it's only single-byte characters) without any backtracking. These strings can be quite lengthy and I don't want the GC churn of allocating a string for the whole message. My current implementation uses CoCo/R with a grammar, but it has a few problems. Due to the backtracking, it assigns the whole stream to a string. It's also a bit fiddly for users of my code to change if they have to. I'd rather have a pure C# solution. CoCo/R also does not allow for the reuse of parser/scanner objects, so I have to recreate them for each message. Conceptually the data stream can be thought of as a sequence of S-Expressions: (item 1 apple)(item 2 banana)(item 3 chainsaw) Parsing this sequence would create three objects. The type of each object can be determined by the first value in the list, in the above case "item". The schema/grammar of the incoming stream is well known. Before I start coding I'd like to know if there are libraries out there that do this already. I'm sure I'm not the first person to have this problem.
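
    For what it's worth, a hedged sketch of just the tokenizing layer (the method name is mine, not an existing library): reading the stream byte by byte and emitting "(", ")" and atom tokens needs no backtracking and never materializes the whole message as one string; a small recursive parser using the known schema can sit on top of it. It needs using System.Collections.Generic, System.IO and System.Text.

        static IEnumerable<string> Tokens(Stream s)
        {
            var atom = new StringBuilder();
            int b;
            while ((b = s.ReadByte()) != -1)
            {
                char c = (char)b;                          // single-byte characters, per the post
                if (c == '(' || c == ')' || char.IsWhiteSpace(c))
                {
                    if (atom.Length > 0) { yield return atom.ToString(); atom.Length = 0; }
                    if (c == '(' || c == ')') yield return c.ToString();
                }
                else
                {
                    atom.Append(c);
                }
            }
            if (atom.Length > 0) yield return atom.ToString();
        }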

  • Perl strings internals

    - by n0rd
    How are Perl strings represented internally? What encoding is used? How do I handle different encodings properly? I've been using Perl for quite a long time, but my use of it didn't include a lot of string handling in different encodings, and when I encountered a minor problem that had something to do with encodings I usually resorted to some shamanic actions. Until this moment I thought about Perl strings as sequences of bytes, which did fit pretty well for my tasks. Now I need to do some processing of a UTF-8 encoded file and here the trouble starts. First, I read the file into a string like this: open(my $in, '<', $ARGV[0]) or die "cannot open file $ARGV[0] for reading"; binmode($in, ':utf8'); my $contents; { local $/; $contents = <$in>; } close($in); then simply print it: print $contents; And I get two things: a warning Wide character in print at <scriptname> line <n> and garbage in the console. So I can conclude that Perl strings have a concept of "character" that can be "wide" or not, but when printed these "wide" characters are represented in the console as multiple bytes, not as a single "character". (I wonder now why all my previous experience with binary files worked the way I expected it to, without any "character" issues.) Why then do I see garbage in the console? If Perl stores strings as characters in some known encoding, I don't think it is a big problem to find out the console encoding and print the text properly. (I use Windows, BTW.) If Perl stores strings as multibyte sequences (e.g. using the same UTF-8 encoding), why is it done this way? From my C experience handling multibyte strings is PAIN.
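
    A hedged note on the immediate symptom rather than the internals question: the "Wide character in print" warning usually goes away once the output handle is told what encoding to produce, e.g.:

        # Tell Perl how to encode characters written to STDOUT (UTF-8 here;
        # a Windows console may actually want cp850/cp1252 to display correctly).
        binmode(STDOUT, ':encoding(UTF-8)');
        print $contents;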

  • Code Golf: Connect 4

    - by Matthieu M.
    If you don't know the Connect 4 game, follow the link :) I used to play it a lot when I was a child. At least until my little sister got bored with me winning... Anyway I was reading the Code Golf: Tic Tac Toe the other day and I thought that solving the Tic Tac Toe problem was simpler than solving the Connect 4... and wondered how much this would reflect on the number of characters a solution would yield. I thus propose a similar challenge: Find the winner The grid is given under the form of a string meant to passed as a parameter to a function. The goal of the code golf is to write the body of the function, the parameter will be b, of string type The image in the wikipedia article leads to the following representation: "....... ..RY... ..YYYR. ..RRYY. ..RYRY. .YRRRYR" (6 rows of 7 elements) but is obviously incomplete (Yellow has not won yet) There is a winner in the grid passed, no need to do error checking Remember that it might not be exactly 4 The expected output is the letter representing the winner (either R or Y) I expect perl mongers to produce the most unreadable script (along with Ook and whitespace, of course), but I am most interested in reading innovative solutions. I must admit the magic square solution for Tic Tac Toe was my personal fav and I wonder if there is a way to build a similar one with this. Well, happy Easter weekend :) Now I just have a few days to come up with a solution of my own!

  • Regex: Use start of line/end of line signs (^ or $) in different context

    - by fgysin
    While doing some small regex task I came upon this problem. I have a string that is a list of tags that looks e.g. like this: foo,bar,qux,garp,wobble,thud What I needed to do was to check if a certain tag, e.g. 'garp', was in this list. (What it finally matches is not really important, just if there is a match or not.) My first and a bit stupid try at this was to use the following regex: [^,]garp[,$] My idea was that before 'garp' there should either be the start of the line/string or a comma, and after 'garp' there should be either a comma or the end of the line/string. Now, it is instantly obvious that this regex is wrong: both ^ and $ change their behaviour in the context of the character class [ ]. What I finally came up with is the following: ^garp$|^garp,|,garp,|,garp$ This regex just handles the 4 cases one by one. (Tag at the beginning of the list, in the center, at the end, or as the only element of the list.) The last regex is somehow a bit ugly in my eyes and just for fun's sake I'd like to make it a bit more elegant. Is there a way the start-of-line/end-of-line characters (^ and $) can be used in the context of character classes?
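
    They can't go inside a character class, but a non-capturing group with alternation expresses the same "start of string or comma" boundary. A hedged sketch of the usual shape:

        (?:^|,)garp(?:,|$)

    All four placements (start, middle, end, only element) fall out of the two alternations, and the groups are non-capturing so nothing extra is captured.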

  • How to escape the character entities in XML?

    - by Chetan Vaity
    I want to pass XML as a string in an XML attribute. <activity evt="&lt;FHS&gt; &lt;act&gt; &lt;polyline penWidth=&quot;2&quot; points=&quot;256,435 257,432 &quot;/&gt; &lt;/act&gt; &lt;/FHS&gt;" /> Here the "evt" attribute is the XML string, so escaping all the less-than, greater-than, etc. characters by the appropriate character entities works fine. The problem is I want a fragment to be interpreted as is - the character entities themselves should be treated as simple strings. When the "evt" attribute is read and an XML is generated from it, it should look like <FHS> <act> &lt;polyline penWidth=&quot;2&quot; points=&quot;256,435 257,432 &quot;/&gt; </act> </FHS> Essentially, I want to escape the character entities. How is this possible?
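
    One way this is usually handled, offered as a hedged illustration only: escape the ampersand of each entity you want to survive, so &amp;lt; comes out of the first parse as the literal text "&lt;". For the fragment above, that means the polyline portion of the evt attribute would be written as:

        ... &lt;act&gt; &amp;lt;polyline penWidth=&amp;quot;2&amp;quot; points=&amp;quot;256,435 257,432 &amp;quot;/&amp;gt; &lt;/act&gt; ...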

  • How does MatchEvaluator work? (C# regex replace)

    - by Marin Doric
    This is the input string 23x * y34x2. I want to insert " * " (star surrounded by whitespaces) after every number followed by a letter, and after every letter followed by a number. So my input string would look like this: 23 * x * y * 34 * x * 2. This is the regex that does the job: @"\d(?=[a-z])|a-z". This is the function that I wrote that inserts the " * ". Regex reg = new Regex(@"\d(?=[a-z])|[a-z](?=\d)"); MatchCollection matchC; matchC = reg.Matches(input); int ii = 1; foreach (Match element in matchC)//foreach match I will find the index of that match { input = input.Insert(element.Index + ii, " * ");//since I'm inserting " * " ( 3 characters ) ii += 3; //I must increment index by 3 } return input; //return modified input My question is how to do the same job using the .NET MatchEvaluator? I'm new to regex and don't understand replacing with MatchEvaluator well. This is the code that I tried to write: Regex reg = new Regex(@"\d(?=[a-z])|[a-z](?=\d)"); MatchEvaluator matchEval = new MatchEvaluator(ReplaceStar); input = reg.Replace(input, matchEval); return input; } public string ReplaceStar( Match match ) { //return What?? }
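
    A hedged sketch of the missing body: Replace calls the evaluator once per match and splices its return value over the matched text, so returning the matched character plus " * " reproduces the manual Insert loop without any index bookkeeping.

        public string ReplaceStar(Match match)
        {
            // match.Value is the single digit/letter that was matched; keep it and append the star.
            return match.Value + " * ";
        }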

  • Need a regular expression to parse a text body

    - by Ali
    Hi guys, I need a regular expression to parse a body of text. Basically, assume that we have text files, each of which contains random text, but within the text there would be lines in the following format - basically they are a format for denoting flight legs. e.g.: 13FEB2009 BDR7402 1000 UUBB 1020 UUWW FLT This line of text is always on one line. The first word is a date in the format DDMMMYYYY. The second word could be of any length and holds alphanumeric characters. The third word is the departure time in format HHMM - it's always numeric. The fourth word is a location code - it's almost always just alphabets but could also be alphanumeric. The fifth word is the arrival time in format HHMM - it's always numeric. The sixth word is a location code - it's almost always just alphabets but could also be alphanumeric. Any words which follow on the same line are just definitions. A text file may contain, among lots of random text, one or more such lines. I need a way to extract all this information, i.e. just these lines within a text file, and store them with their integral parts separated as mentioned in an associative array so I have something like this: array('0'=>array('date'=>'', 'time-dept'=>'', 'flightcode'=>'',....)) I'm assuming regular expressions would be in order here. I'm using PHP for this - would appreciate the help guys :)
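
    A hedged sketch (field names are mine, not the poster's): one line per leg, anchored on the date/time/code shape described above, with anything trailing on the line captured as the definitions.

        <?php
        $pattern = '/^(?P<date>\d{2}[A-Z]{3}\d{4})\s+(?P<flightcode>\w+)\s+(?P<deptime>\d{4})\s+' .
                   '(?P<from>[A-Z0-9]+)\s+(?P<arrtime>\d{4})\s+(?P<to>[A-Z0-9]+)(?P<defs>.*)$/m';
        preg_match_all($pattern, $body, $legs, PREG_SET_ORDER);
        // $legs[0]['date'], $legs[0]['deptime'], ... for each matching line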

  • Multi-base conversion - using all combinations for URL shortener

    - by Guffa
    I am making an URL shortener, and I am struggling with the optimal way of encoding a number (id) into a character string. I am using the characters 0-9,A-Z,a-z so it will basically be a base-62 encoding. That is pretty basic, but it doesn't make use of all possible codes. The codes that it would produce would be: 0, 1, ... y, z, 10, 11, ... zy, zz, 100, 101, ... Notice that the codes 00 to 0z is not used, the same for 000 to 0zz, and so on. I would like to use all the codes, like this: 0, 1, ... y, z, 00, 01, ... zy, zz, 000, 001, ... It would be some combination of base-62 and base-63, with different bases depending on the position... Using base-62 is easy, for example: create procedure tiny_GetCode @UrlId int as set nocount on declare @Code varchar(10) set @Code = '' while (@UrlId > 0 or len(@Code) = 0) begin set @Code = substring('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', @UrlId % 62 + 1, 1) + @Code set @UrlId = @UrlId / 62 end select @Code But I haven't yet managed to make a multi-base conversion out of it, to make use of all the codes.
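
    What this describes is essentially bijective base-62: treat the digits as the values 1..62 instead of 0..61, so every length gets its full 62^n block and codes like '00' stop being unreachable. A hedged sketch built on the procedure above, not tested against a real id table:

        create procedure tiny_GetCodeBijective @UrlId int as
        set nocount on
        declare @Code varchar(10), @n int, @d int
        set @Code = ''
        set @n = @UrlId + 1   -- shift to 1-based so id 0 maps to '0', 61 to 'z', 62 to '00', ...
        while (@n > 0)
        begin
            set @d = @n % 62
            if (@d = 0)
            begin
                set @d = 62
                set @n = @n / 62 - 1
            end
            else
                set @n = @n / 62
            set @Code = substring('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', @d, 1) + @Code
        end
        select @Code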

  • What's wrong with this regex (VBScript/Javascript flavor)

    - by OtherMichael
    I'm trying to run a regular expression in VBA code that uses Microsoft VBScript Regular Expressions 5.5 (should be the same as JavaScript regex) regex: ^[0-9A-Z]?[0-9A-Z]{3}[A-Z]?([0-9A-Z]{6})-?([0-9])?$ input: X123A1234567 match: 123456 the six characters I'm interested in give a good match of 123456, ignoring the last (check) digit. Perfect. (The check digit is captured, but it's not a major concern to me). But when BOTH the optional portions are gone (they are optional) the match grabs the last digit GOOD input: 123A1234567 match: 123456 Leave in the optional middle alpha, take out the optional leading alpha, and we still get the good match of 123456 GOOD input: X1231234567 match: 123456 Leave in the optional leading alpha, take out the middle optional alpha, and we still get a good match of 123456 BAD input: 1231234567 match: 234567 Take out BOTH optional alphas, and we get a bad match of 234567 Have a looksee @ the regex testers on http://www.regular-expressions.info/javascriptexample.html or http://www.regular-expressions.info/vbscriptexample.html What am I missing, here? How can I get the regex to ignore the last digit when both optional alphas are missing? The regex is used to feed a lookup system, so that no matter what format the input data, we can match to a complete value.
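
    One tweak worth trying, offered as a hedged guess rather than a verified fix: make the leading optional character lazy ([0-9A-Z]??) so the engine only consumes a leading character when the rest of the pattern cannot line up without it. Worked through by hand this appears to yield 123456 for all four inputs, but test it in the same VBScript/JavaScript testers before trusting it:

        ^[0-9A-Z]??[0-9A-Z]{3}[A-Z]?([0-9A-Z]{6})-?([0-9])?$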
