Search Results

Search found 5222 results on 209 pages for 'characters'.

Page 9/209

  • Unicode characters and IE

    - by findmeahamper
    I just built a site that relies on certain Unicode characters like &#9398;, but I have just realized that IE doesn't show these characters. Is there some meta tag to get the browser to show them, or how do you update IE to handle these Unicode characters?
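
    A minimal sketch of the usual first things to check (not from the original thread; the font names are only examples): declare the page encoding and specify a font that actually contains the glyph, since older versions of IE do far less automatic font substitution than other browsers and only show characters covered by the selected font.

      <!-- Sketch: declare the encoding explicitly -->
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <!-- ...and name fonts that contain the glyph; these names are examples, not a definitive list -->
      <style>
        .symbol { font-family: "Arial Unicode MS", "Segoe UI Symbol", "Lucida Sans Unicode", sans-serif; }
      </style>
      <span class="symbol">&#9398;</span> <!-- U+24B6 CIRCLED LATIN CAPITAL LETTER A -->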

    Read the article

  • How to insert Arabic characters into a SQL database?

    - by Pavan Reddy
    How can I insert Arabic characters into a SQL database? I tried to insert Arabic data into a table, and the Arabic characters in the insert script were stored as '??????' in the table. When I pasted the data directly into the table through SQL Server Management Studio, the Arabic characters were inserted accurately. I looked around for solutions to this problem, and some threads suggested changing the datatype from varchar to nvarchar. I tried this as well, but without any luck. How can we insert Arabic characters into a SQL database?
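
    For what it's worth, the '??????' symptom usually means the literal was sent as a non-Unicode varchar string. A minimal T-SQL sketch (the table and column names are made up for illustration): the column must be nvarchar and the literal must carry the N prefix, otherwise it is converted through the default code page before it reaches the column.

      -- Hypothetical table, purely for illustration
      CREATE TABLE dbo.Names (Id int IDENTITY PRIMARY KEY, ArabicName nvarchar(100));

      INSERT INTO dbo.Names (ArabicName) VALUES ('مرحبا');   -- no N prefix: stored as ?????
      INSERT INTO dbo.Names (ArabicName) VALUES (N'مرحبا');  -- N prefix: stored correctly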

    Read the article

  • JavaScript automatically converts some special characters

    - by noplacetoh1de
    I need to extract an HTML substring with JS, and the extraction is position-dependent. I store special characters HTML-encoded. For example, the HTML <div id="test"><p>l&ouml;sen &amp; gr&uuml;&szlig;en</p></div> renders as the text lösen & grüßen. My problem lies in the JS part: when I try to extract the fragment l&ouml; ("lö"), which has the HTML-dependent start position 3 and end position 9 inside the <div> block, JS seems to convert some special characters internally, so the range from 3 to 9 is interpreted as "lösen " and not "l&ouml;". Other special characters like &amp; are not affected by this. So my question is: why does JS behave this way? Characters like &auml; or &ouml; are converted, while characters like &amp; or &nbsp; are left alone. Is there any way to avoid this conversion? I've set up a fiddle to demonstrate this: JSFiddle. Thanks for any help! EDIT: Maybe I've explained it a bit confusingly, sorry for that. What I want is the HTML <p>l&ouml;sen &amp; gr&uuml;&szlig;en</p>, with every special character left unconverted except the HTML tags, as in the HTML above. But JS converts &ouml; or &uuml; into ö or ü automatically, which I need to avoid.
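
    For what it's worth, this is standard parser behaviour rather than a JS quirk: the browser decodes character references while building the DOM, and serialising with innerHTML only re-escapes markup-significant characters (&, <, >), not ö or ü. If the goal is to get entity notation back, one approach is to re-encode non-ASCII characters yourself after reading the markup. A minimal sketch (the function name is only for illustration, and it yields numeric references such as &#246; rather than named ones like &ouml;):

      // Sketch: re-encode non-ASCII characters as numeric references after the
      // parser has already decoded &ouml;, &uuml;, etc.
      function encodeNonAscii(markup) {
        return markup.replace(/[\u0080-\uFFFF]/g, function (ch) {
          return '&#' + ch.charCodeAt(0) + ';';
        });
      }
      var html = document.getElementById('test').innerHTML; // "<p>lösen &amp; grüßen</p>"
      var encoded = encodeNonAscii(html);                    // "<p>l&#246;sen &amp; gr&#252;&#223;en</p>"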

    Read the article

  • In the JSON spec, what does "Since the first two characters of a JSON text will always be ASCII characters" mean?

    - by dan gibson
    The spec is http://www.ietf.org/rfc/rfc4627.txt?number=4627 It contains this: Encoding JSON text SHALL be encoded in Unicode. The default encoding is UTF-8. Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets. What does "Since the first two characters of a JSON text will always be ASCII characters [RFC0020]" mean? I've looked at RFC0020 but couldn't find anything about it. JSON could start with {" or { " (i.e. whitespace before the quote).
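
    For what it's worth, the point of that sentence is only that the first two characters, whatever they turn out to be (braces, brackets, quotes or whitespace are all ASCII), are encoded with zero bytes in predictable positions under UTF-16 and UTF-32. A small sketch of that null-pattern check, my own illustration rather than code from the RFC:

      // Sketch: given the first four octets of the stream, infer the Unicode
      // encoding from where the zero bytes fall, as RFC 4627 section 3 describes.
      function detectJsonEncoding(b) {
        if (b[0] === 0 && b[1] === 0 && b[2] === 0) return 'UTF-32BE'; // 00 00 00 xx
        if (b[0] === 0 && b[2] === 0)               return 'UTF-16BE'; // 00 xx 00 xx
        if (b[1] === 0 && b[2] === 0 && b[3] === 0) return 'UTF-32LE'; // xx 00 00 00
        if (b[1] === 0 && b[3] === 0)               return 'UTF-16LE'; // xx 00 xx 00
        return 'UTF-8';                                                // xx xx xx xx
      }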

    Read the article

  • Replacing special characters by null

    - by madheena
    Hi, is there any function in Informatica to replace special characters with null? If we use the REPLACESTR function, I think we have to list every special character, as follows: replacestr(input,'!','~','@','#','$','%','^','&','*',null). But we don't know which special characters will be coming in as input. Can you please let me know which function would be suitable?
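
    One commonly suggested alternative, assuming the PowerCenter version in use has the REG_REPLACE function (treat that availability, and the port name below, as assumptions): instead of listing every character to remove, strip everything that is not in the set you want to keep.

      -- Informatica expression sketch; IN_STRING is a hypothetical input port
      -- Keep letters, digits and spaces, drop everything else
      REG_REPLACE(IN_STRING, '[^A-Za-z0-9 ]', '')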

    Read the article

  • Regex: How to leave out webding font characters?

    - by DS
    Hi, I have a free text field on my form where users can type in anything. Some users paste text into this field from Word documents with weird characters that I don't want in my DB (e.g. Webdings font characters). I'm trying to write a regular expression that keeps only the alphanumeric and punctuation characters. But when I try the following, the output still contains all the characters. How can I leave them out? <html><body><script type="text/javascript">var str="???????";document.write(str.replace(/[^a-zA-Z 0-9 [:punct]]+/g, " "));</script></body></html>
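
    For what it's worth, the pattern in the snippet can't work as written: JavaScript regular expressions don't support POSIX classes such as [:punct:], so inside the brackets those characters are read literally and the first ] ends the class. A sketch of the whitelist idea with the punctuation spelled out explicitly (the sample string is invented):

      // Sketch: keep letters, digits, spaces and the listed punctuation; replace
      // runs of anything else (e.g. symbols pasted from Word) with a single space.
      var str = 'Hello, world! \u25BA\u2738 pasted symbols';
      var cleaned = str.replace(/[^a-zA-Z0-9 .,;:!?'"()\[\]{}@#$%&*+=\/\\_~^<>|`-]+/g, ' ');
      document.write(cleaned);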

    Read the article

  • Replacing characters in Ruby string according to some rule

    - by Kyle Kaitan
    In Ruby, I have a string of identical characters -- let's say they're all exclamation points, as in !!!!. I would like to replace the characters at certain indices with '*' if the integer corresponding to that index meets some criteria. For example, let's say I want to replace all the characters whose indices are even numbers and are greater than 3. In the string !!!!!!!! (8 characters long), that results in !!!!*!*! (indices 4 and 6 have been replaced). What's the most compact way to do this?
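
    A compact sketch of one way to do it, using the example rule from the question (index even and greater than 3):

      # Sketch: star every character whose index is even and greater than 3
      str = "!!!!!!!!"
      result = str.chars.each_with_index.map { |ch, i| (i.even? && i > 3) ? "*" : ch }.join
      puts result  # => "!!!!*!*!"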

    Read the article

  • Why do XSLT editors insert tab or space characters into XSLT to format it?

    - by pgfearo
    All XSLT editors I've tried so far add tab or space characters to the XSLT to indent it for formatting. This is done even in places within the XSLT where these characters are significant to the XSLT processor. XSLT modified for formatting in this way can produce output very different from that of the original, unformatted XSLT. To prevent this, xsl:text elements or other XSLT must be added to the sequence constructor to separate formatting from content, and this additional XSLT hurts maintainability. Formatting characters also adversely affect the general usability of the tool in a number of ways (which is why word processors don't use them, I guess) and add to the size of the file. As part of a larger project I've had to develop a lightweight XSLT editor. It's designed to format XSLT properly, but without tab or space characters, using just a dynamic left margin for each new line. The XSLT therefore doesn't need additional elements to separate formatting tabs or spaces from content. The problem with this is that if XSLT from this editor is opened in other XSLT editors, characters will be added for formatting reasons and the XSLT may therefore no longer behave as intended. Why, then, do existing XSLT editors use tabs or spaces for formatting in the first place? I feel there must be valid reasons, perhaps historical, perhaps practical. An answer will help me understand whether I need to put compatibility options in place in my XSLT editor, whether I should simply revert to using tabs or spaces for both XSLT content and formatting (though this seems like a backwards step to me), or even whether enough XSLT users might be able to persuade their tool vendors to include alternative formatting methods to tabs or spaces. Note: I provided an XSLT sample demonstrating formatting differences in this answer to the question: Tabs versus spaces—what is the proper indentation character for everything, in every situation, ever?
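
    A small sketch of the maintainability cost described above: whitespace-only text nodes in a stylesheet are ignored, but once an editor re-wraps mixed content the inserted newline and indentation sit inside a text node that also holds real text and get copied to the output, so xsl:text has to be added purely to fence content off from formatting.

      <!-- Original: emits "Hello, ", the name, and "!" with no extra whitespace -->
      <xsl:template match="name">Hello, <xsl:value-of select="."/>!</xsl:template>

      <!-- After an editor re-indents it, xsl:text is needed to keep the output unchanged -->
      <xsl:template match="name">
        <xsl:text>Hello, </xsl:text>
        <xsl:value-of select="."/>
        <xsl:text>!</xsl:text>
      </xsl:template>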

    Read the article

  • Error while zipping files with unicode characters in names with Win7's "send to > compressed (zipped) folder"

    - by user1306322
    When I try to zip files containing Unicode characters such as © or ™ in their names, I get the following error: [Window Title] Compressed (zipped) Folders Error [Content] 'C:\Asd™.txt' cannot be compressed because it includes characters that cannot be used in a compressed folder, such as ™. You should rename this file or directory. [OK] This only became a problem when I reinstalled Windows 7. I probably had some resource that was needed for this error to be resolved automatically, but it's an almost clean installation now and I can't zip these files. How do I fix this? UPDATE: Some time has passed since I posted this question and I have installed some of my usual applications, but the problem still exists and I'm not sure whether it can be fixed by installing some specific application from before.

    Read the article

  • How to read a Text File's Hidden Characters?

    - by balexandre
    Hi guys, I've created a text file from an application that I developed. When I send the text file to a SYSTEM Validation, they (3rd Party System) say that the file is invalid, that the file contains 3 characters at the beginning that are not allowed, and that the special characters are not correct. They also say I need to use either ISO 8859-1 or PC850. Well, I'm using Notepad++ and I can't see that at all! What is the best text file reader for this kind of problem? EDITED: I also have a Mac, and on a thought I remembered opening the file in TextMate ... WOW! Now I know what they are talking about! How can I see the same thing in Windows?
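
    For what it's worth, three unexpected bytes at the very start of a text file are typically a UTF-8 byte order mark (EF BB BF), which an ISO 8859-1 or PC850 consumer will reject. An editor-independent way to check is to hex-dump the first few bytes; the sketch below uses Python 3 purely for illustration, with a made-up file name:

      # Sketch: show the first bytes of the exported file so BOMs and control
      # characters become visible; a UTF-8 BOM appears as EF BB BF.
      with open("export.txt", "rb") as f:
          head = f.read(16)
      print(" ".join("{:02X}".format(b) for b in head))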

    Read the article

  • Non-printing characters in Word 2011 not showing even when enabled

    - by Henrik Söderlund
    I have a document I work on often, my resume. I have created a few different styles that I use, and for some reason the non-printing characters have stopped showing properly. I have the option enabled (the reversed P, i.e. the pilcrow) and the proper settings checked in the preferences. In the current view, basically only the tab stops and the returns are showing. As an experiment I created a new document, and there all characters (especially the spaces) show up nicely. I can copy such a line and paste it into my resume document and the characters show up there too. It seems my styles are doing something...

    Read the article

  • nginx regex characters that require quoting?

    - by Michael Louis Thaler
    So I was configuring nginx today and I hit a weird problem. I was trying to match a location like this: location ~ ^/([0-9]+)/(.*) { # do proxy redirects } ...for URLs like "http://my.domain.com/0001/index.html". This rule was never matching, despite the fact that by all rights it should. It took me a while to figure out, based on this documentation, that some characters in regexes need to be quoted. The problem is, the documentation is for rewrites, and it specifically calls out curly braces, not square brackets. After a fair bit of experimentation that involved a lot of swearing, I discovered that I could fix the problem by quoting the regex like so: location ~ "^/([0-9]+)/(.*)" { # do proxy redirects } Is there a list somewhere of the characters that require a regex to be quoted in nginx? Or could there be something else going on here that I'm totally missing? This is my first nginx configuration job, so it's very possible I've misunderstood something...

    Read the article

  • Max URL length of 257 characters for mod_rewrite?

    - by Daniel
    My URL scheme is /foo/var1-var2-var3.../bar and I am using these mod_rewrite rules: RewriteBase /foo/ RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^ index.php [PT,L] If the length of the string 'var1-var2...' is greater than 257 characters, then a 403 Forbidden error and a 404 are returned. However, if the 'var1-var2...' string is 257 characters or less and is subsequently followed by a slash, the remainder of the URL may be any length. How does one overcome this limit?

    Read the article

  • Message-ID contains multiple '@' characters

    - by Thomaschaaf
    I am looking at my server's SpamAssassin setup and have stumbled across the "problem" that my outgoing email includes its own X-Report and X-Score headers (the programs used are Outlook 2007 as the client, and exim4 with the vexim plugin and SpamAssassin on the server). On the one hand I want to get rid of the X-Report that gets sent out with every email, while on the other hand I still want to have it for incoming mail. While I was trying to fix this (which I still haven't), I stumbled across this "error", which makes my email less trustworthy: 1.4 MSGID_MULTIPLE_AT Message-ID contains multiple '@' characters. How do I get rid of the multiple '@' characters?

    Read the article

  • remove words containing non-alpha characters

    - by dnkb
    Given a text file with space-separated words followed by a tab-separated integer, I'd like to get rid of all words that contain non-alpha characters, but keep the words consisting only of alpha characters, along with the tab and the integer after them. My attempts, like the ones below, didn't yield anything good. What I was trying to express is something like: "replace anything within word boundaries that starts and ends with 0 or more of whatever and has at least one :digits: or :punct: in between".
    sed 's/\b.[:digits::punct:]+.\b//g'
    sed 's/\b.[^:alpha:]+.\b//g'
    What am I missing? See the sample input data below (words are space-separated; the final integer on each line is preceded by a tab). Thank you!
    asdf 754m	563
    a2a 754mm	291
    754n	463
    754 ppp	1409
    754pin	4652
    pin pin	462
    754pins	652
    754 ppp	1409
    754pin	4652
    pi$n pin	462
    754/p ins	652
    754 pp+p	1409
    754 p=in	4652
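
    Part of what's missing is the POSIX class syntax: character classes are written with nested brackets and singular names, e.g. [[:digit:]] or [[:punct:]], not [:digits:]. Beyond that, one possible GNU sed sketch for data shaped like the sample above (it assumes GNU sed's \t escape; with another sed, type literal tab characters instead). It repeatedly deletes any space-delimited word before the tab that contains a character other than a letter, then tidies up the leftover spaces:

      sed -E ':a; s/(^| )[^ \t]*[^A-Za-z \t][^ \t]*([ \t])/\1\2/; ta; s/^ +//; s/  +/ /g; s/ \t/\t/' input.txt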

    Read the article

  • Public IP shows strange characters and Facebook registers logged-in session to a different location

    - by Stuart Kershaw
    I'm encountering some IP strangeness today and hoping to find an explanation. In short, I'm based in Seattle, WA, with my ISP being Comcast. While browsing Facebook's account settings, I noticed that my active session was located in Mount Laurel, NJ. At that point I ran a Google search for 'my public IP', which returned an interesting result: a string of characters in the following format: 2601:8:b000:xxx:xxxx:xxxx:xxxx:xxxx Normally, a search for my IP returns something like 67.xxx.xx.xxx. A phone call to Comcast got me nowhere, but using Comcast's phone-menu debugging tools I was able to send a 'refresh signal' to my modem. After that, the search for 'my public IP' yielded the expected result... for about 5 minutes, and then it returned to the new string of characters. Does anyone know of an explanation for this?

    Read the article

  • Why are Unicode characters not rendering correctly?

    - by sw1nn
    Background: I have some Unicode characters in my prompt (git status markers, essentially). I'm running urxvt under xfce on Arch Linux. I'm using the DejaVu Sans Mono for Powerline font, specified via this .Xresources line: URxvt*font: xft:DejaVu Sans Mono for Powerline:pixelsize=14 When I start urxvt, the Unicode characters do not render correctly. For example, ? renders as â. However, if I then start a new urxvt from inside the first terminal, everything renders correctly. There doesn't appear to be any difference in the environment between the two terminals. What could be the difference between the first invocation and the nested invocation? I suspect the font is not correct in the 'outer' instance, but I'm unsure how to check the font of a running X window. Note: I moved this question from serverfault.com; I hope this site is more appropriate.

    Read the article

  • How to type accented characters in Ubuntu 10.04 with an Apple Aluminum Keyboard

    - by jfmessier
    I installed the latest Ubuntu 10.04, and I used to have the Command, Option or right Ctrl keys set as compose keys to write accented characters. But I find that under Ubuntu 10.04 the compose key is not working, even if I specify the proper Apple keyboard. Since I cannot work with keyboard layouts other than the plain USA one along with compose keys (I never learned, and I hate, the French layout), this is about my only way to input accented characters. I still have to try it with a regular keyboard to see whether there is a difference. Thanks :-)

    Read the article
