Search Results

Search found 5222 results on 209 pages for 'characters'.

  • Form validation white page not showing errors

    - by Jess McKenzie
    In the example below I want to do all of the 'safety' checks on the $_POST variables, but when I click submit I get a white page. Why? I want it to show the errors. Form process:

        /* check if the form is submitted */
        if (isset($_POST['submitButton'])) {
            $fullName = $_POST['fullname'];
            if ($_SERVER['REQUEST_METHOD'] == 'POST' && !empty($fullName)) {
                if (!ctype_alpha(str_replace(array("'", "-"), "", $fullName))) {
                    $errorfullName .= '<span class="errorfullName">*First name should be alpha characters only.</span>';
                }
                if (strlen($fullName) < 3 OR strlen($fullName) > 40) {
                    $errorfullName .= '<span class="errorfullName">*First name should be within 3-40 characters long.</span>';
                }
            }
        }

    Read the article

  • CURL issue in PHP while getting location list

    - by Ajay
    I am retrieving the nearest locations available for a given address (longitude/latitude) from a geolocation website. It works fine, but for some places it gives junk characters in the name. Moreover, in the browser I get different characters compared to my PHP cURL output. Here is the URL: http://www.geoplugin.net/extras/nearby.gp?lat=17.7374669&long=83.3214858&limit=5&radius=50&format=php One of the locations is "Sitampeta" in the original location name, but in the browser I get "Sitammapeta", whereas in the cURL function I get "Sītammapeta". Please tell me why there is this difference. I wrote a function to convert the browser output to the original, which works fine:

        function convert ($old) {
            $n = "";
            for ($i = 0; $i < strlen($old); $i++) {
                $n .= chr(ord(substr($old, $i, 1)));
            }
            return $n;
        }

    But I don't understand how to convert the cURL output to the original name.

    Read the article

  • how to implement a game character task queue

    - by Stephen Lee Parker
    I'm working on a personal game engine in C# and need to give certain characters / sprites responses to conditions, or certain patterns that they follow. Since these patterns will be repeated over and over for other characters / sprites, I don't want to tie the patterns to the character / sprite. I will likely want to define the conditions / actions in level data files... I plan to use this for platformers, space shooters, and Pac-Man-like games... almost an extensible AI system. Any suggestions on how this can be implemented?

    Read the article

  • How to convert UTF-8 and Unicode to normal text?

    - by Mehdi Amrollahi
    I have a downloader program that downloads pages from the internet. The encoding of each page is different; some are in UTF-8 and some are Unicode. For example: &#97;, which shows the 'a' character; the pages are full of these characters. I need to convert these encodings to normal text. I used the UnicodeEncoding class in C#, but it does not help me. How can I decode these encodings into real characters? Is there a class or method that converts this? Thanks.
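
    The &#97; form is an HTML numeric character reference rather than a byte-encoding problem, so the missing step is an HTML-entity decode, not a text-encoding conversion (in C# that role is played by something like HttpUtility.HtmlDecode). As a two-line sketch in Python:

        import html

        print(html.unescape("&#97; &amp; &#65;"))   # prints: a & A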

    Read the article

  • Python: Huge file reading using linecache vs. normal file access with open()

    - by user335223
    Hi, I am in a situation where multiple threads read the same huge file, with multiple file pointers to the same file. The file will have at least 1 million lines. Each line's length varies from 500 to 1500 characters. There won't be "write" operations on the file. Each thread will start reading the same file from a different line. Which is the most efficient way? Using Python's linecache, normal readline(), or is there another efficient way?
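
    One workable layout, shown only as a sketch (assuming each thread can open its own handle on the file): scan the file once to record the byte offset of every line, then any thread can seek straight to its starting line instead of re-reading from the top. The function names and the UTF-8 decode are illustrative assumptions, not part of the question.

        def build_line_index(path):
            """Scan the file once and record the byte offset where each line starts."""
            offsets = [0]
            with open(path, "rb") as f:
                for line in f:
                    offsets.append(offsets[-1] + len(line))
            return offsets[:-1]                    # offsets[n] = start of line n

        def read_line(path, offsets, n):
            """Each caller opens its own file object, so threads never share a file position."""
            with open(path, "rb") as f:
                f.seek(offsets[n])
                return f.readline().decode("utf-8", errors="replace")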

    Read the article

  • getTextContent from Node with whitespace character normalization

    - by Nayn
    Hi, I am working with XPath and Java and want to extract some text out of an HTML page. The text is located under some div, with some whitespace characters in between, like &nbsp; and <br> etc. I want these to be converted into a 'space' and a 'newline' respectively while extracting. The method I am using to extract the text is Element.getTextContent(), which does not respect whitespace characters. Could somebody tell me if there is a way to extract text with whitespace normalization, OR to extract the whole HTML markup under the 'Node' so that I can replace it myself. Thanks, Nayn

    Read the article

  • How to find validity of a string of parentheses, curly brackets and square brackets?

    - by Rajendra
    I recently came across this interesting problem. You are given a string containing just the characters '(', ')', '{', '}', '[' and ']', for example "[{()}]", and you need to write a function which will check the validity of such an input string; the function may look like this: bool isValid(char* s); The brackets have to close in the correct order, for example "()" and "()[]{}" are valid but "(]", "([)]" and "{{{{" are not! I came up with the following O(n) time and O(n) space solution, which works fine:
    1. Maintain a stack of characters.
    2. Whenever you find an opening bracket '(', '{' or '[', push it onto the stack.
    3. Whenever you find a closing bracket ')', '}' or ']', check if the top of the stack is the corresponding opening bracket; if yes, pop the stack, else break the loop and return false.
    4. Repeat steps 2-3 until the end of the string.
    This works, but can we optimize it for space, maybe to constant extra space? I understand that the time complexity cannot be less than O(n) as we have to look at every character. So my question is: can we solve this problem in O(1) space?
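
    For reference, the stack approach in steps 1-4, written out as a short sketch (in Python rather than the C-style signature quoted above):

        PAIRS = {')': '(', ']': '[', '}': '{'}

        def is_valid(s):
            stack = []
            for ch in s:
                if ch in '([{':
                    stack.append(ch)                  # step 2: push every opener
                elif ch in PAIRS and (not stack or stack.pop() != PAIRS[ch]):
                    return False                      # step 3: closer must match the top of the stack
            return not stack                          # leftover openers mean the string is unbalanced

        assert is_valid("()[]{}") and is_valid("[{()}]")
        assert not is_valid("(]") and not is_valid("([)]") and not is_valid("{{{{")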

    Read the article

  • how to add language support to android

    - by avar
    Hello, I hope someone can put me on the right direction with my problem. I want to work on supporting my language, especially the writing, on Android; we use Arabic characters with little modification (like Farsi, Urdu, etc.). I was hoping Android fully supported Arabic; then I would make changes to work with my language. But even 2.2 doesn't support Arabic: it just shows Arabic characters and they are not connected, that is, "?????" is displayed as "? ? ? ? ?". Some individuals have made Arabic support in CyanogenMod ROMs for some HTC phones, but they would not tell how they made it. I've got the Android source code; I want to know where to start, where to make changes, and what library handles fonts, the shaping engine, etc. PS: android.com and other Android-related Google groups are blocked in my country.

    Read the article

  • python dictionary conversion from string?

    - by shahjapan
    If I have a string like "{ partner_name = test_partner}" or "{ partner_name : test_partner }" (this is just an example; the string can be very complex, with several special characters included, like =, [, ], {, }), what is the best way to convert it into a Python object so I can process it? I tried eval, but it requires " ' " around strings, and how can we add this special character ' before the start and end of every word? I tried the regular expression re.findall('\w+'), but it fails when my string contains '_' or similar characters, as it will separate the string at '_'. The object of this question is my application's need for a user-friendly language as input. I thought a JSON dict would be good, but the user is too lazy to put " ' " before and after each string... then I thought of YAML, but it's also complex. If anybody can suggest a better user-friendly input which I can use as a Python object, please help me out.
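
    One possible direction, shown only as a sketch: normalize the separators, quote every bare word, and hand the result to ast.literal_eval (which, unlike eval, accepts only literals). The function name and the exact regex are illustrative assumptions, not a drop-in parser for every input described above.

        import ast
        import re

        def parse_loose_dict(text):
            """Turn loose '{ key = value }' / '{ key : value }' text into a dict."""
            text = text.replace("=", ":")                 # treat '=' and ':' alike
            text = re.sub(r"[A-Za-z_]\w*",                # quote each bare word (underscores stay inside the word)
                          lambda m: "'{}'".format(m.group(0)), text)
            return ast.literal_eval(text)

        print(parse_loose_dict("{ partner_name = test_partner}"))
        # -> {'partner_name': 'test_partner'}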

    Read the article

  • String Functions in IIS Url Rewrite Module

    - by Nariman
    The IIS URL Rewrite Module ships with 3 built-in functions:
    * ToLower - returns the input string converted to lower case.
    * UrlEncode - returns the input string converted to URL-encoded format. This function can be used if the substitution URL in rewrite rule contains special characters (for example non-ASCII or URI-unsafe characters).
    * UrlDecode - decodes the URL-encoded input string. This function can be used to decode a condition input before matching it against a pattern.
    The functions can be invoked by using the following syntax: {function_name:any_string}
    The question is: can this list be extended by introducing a Replace function that's available for changing values within a rewrite rule action or condition? Another way to frame the question: is there any way to do a global replace on a URL coming in using this module? It seems that you're limited to using regular expressions and back-references to construct strings, without a search/replace functionality to replace every X with Y in {REQUEST_URI} before issuing a redirect.

    Read the article

  • Making Django ignore string literals

    - by James
    UPDATE: It turns out this is a deeper question than I thought at first glance - the issue is that Python is replacing the string literals before they ever get to Django. I will do more investigating and update this if I find a solution. I'm using Django to work with LaTeX templates for report generation, and am running into a lot of problems with the way Django replaces parts of strings. Specifically, I've run into two problems when I try to insert a variable containing LaTeX code. The first was that it would replace HTML characters, such as the less-than symbol, with their HTML codes, which are of course gibberish to a LaTeX interpreter. I fixed this by setting the context to never autoescape, like so:

        c = Context(inputs)
        c.autoescape = False

    However, I still have my second issue, which is that Django replaces string literals with their corresponding characters, so a double backslash becomes \, and \b becomes a backspace. How can I force Django to leave these characters in place, so that inputs['variable'] = '{\bf this is code} \\' won't get mangled when I use {{variable}} to reference it in the Django template?
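
    A minimal illustration of the escape processing the update points at: it is Python's parsing of the string literal, not Django's template engine, that turns \b into a backspace and collapses \\ into one backslash, and a raw-string literal keeps the characters intact (noting that a raw string cannot end in a single backslash, so the trailing one is left off here):

        plain = '{\bf code} \\'   # '\b' is parsed as a backspace, '\\' as a single backslash
        raw   = r'{\bf code}'     # raw-string literal: the backslash and the 'b' both survive
        print(repr(plain))        # -> '{\x08f code} \\'
        print(repr(raw))          # -> '{\\bf code}'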

    Read the article

  • Split string into multiple lines

    - by RememberME
    I have a long string of comments that I'd like to split into multiple lines. It's currently displayed as <%= Html.Encode(item.important_notes) %>. I've played with using .Substring to split it, but can't figure out how to prevent it from splitting in the middle of a word. Instead of characters 1-100 on line 1 and 101-200 on line 2, I'd like to do something like character 1 through the last space before character 100 on line one, then from that character through the last space before the next 100 characters on line 2, etc. What is the best way to do this? EDIT: using ASP.NET MVC
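
    The question is ASP.NET MVC, but the rule being described (take up to 100 characters per line, backing up to the last space so no word is split) is small enough to sketch; here it is in Python, with the width as a parameter:

        def wrap_words(text, width=100):
            """Greedy word wrap: cut each line at the last space before `width` characters."""
            lines = []
            while len(text) > width:
                cut = text.rfind(" ", 0, width + 1)   # last space within the first `width` characters
                if cut <= 0:
                    cut = width                       # a single word longer than `width`: hard break
                lines.append(text[:cut].rstrip())
                text = text[cut:].lstrip()
            lines.append(text)
            return lines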

    Read the article

  • Help with proper character encoding.

    - by mmattax
    I have an HTML form that is sometimes submitted with accented characters: à, è, ì, ò, ù. I have a PHP script that exports these form submissions into CSV format. When I look at the CSV in a text editor (vim or notepad, for example) the characters look fine, but when it is opened with Open Office or Word, I get some funky results: ????? I am also passing these submissions to Salesforce and am getting an error: "The entity "Atilde" was referenced, but not declared." What can I do to ensure the portability of my CSV file? What's the proper way to handle the encoding? My HTML file's content type is set as: Content-Type: text/html; charset=utf-8 Data is being stored in MySQL with the latin1_swedish_ci collation.

    Read the article

  • SQL Server - Percent based Full Text Search

    - by Sukhminder Singh
    Hi, I want to search a particular column of a table in such a way that the returned result set satisfies the following 2 conditions:
    1. The returned result set should have records where 90% of the characters match the given search text.
    2. The returned result set should have records where 70% of the consecutive characters match the given search text.
    This implies that when the 10-character word Sukhminder is searched, it should return records like Sukhmindes, ukhminder, Sukhmindzr, because they fulfil both of the above-mentioned conditions. But it should not return records like Sukhmixder, because that does not fulfil the second condition. Likewise, it should not return the record Sukhminzzz, because that does not fulfil the first condition. I am trying to use the Full Text Search feature of SQL Server, but could not formulate the required query yet. Kindly reply ASAP.

    Read the article

  • How to trigger an event in input text after I stop typing/writing?

    - by user1386320
    I want to trigger an event right after I stop typing (not while typing) characters in my input textbox. I've tried this:

        $('input#username').keypress(function() {
            var _this = $(this); // copy of this object for further usage
            setTimeout(function() {
                $.post('/ajax/fetch', {
                    type: 'username',
                    value: _this.val()
                }, function(data) {
                    if (!data.success) {
                        // continue working
                    } else {
                        // throw an error
                    }
                }, 'json');
            }, 3000);
        });

    But this example produces a timeout for every typed character, and I get about 20 AJAX requests if I type in 20 characters. On this fiddle I demonstrate the same problem with a simple alert instead of the AJAX call. Is there a solution for this, or am I just using a bad approach?
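
    The usual remedy is a debounce: keep a handle to the pending timer and cancel it on every new keystroke, so only the final pause fires the request (in the jQuery code above, that means storing the setTimeout id and calling clearTimeout on it). The same idea as a language-neutral sketch, here in Python with threading.Timer standing in for setTimeout:

        import threading

        class Debouncer:
            """Run `action` only after `delay` seconds with no further calls:
            each new call cancels the previously scheduled run."""

            def __init__(self, delay, action):
                self.delay = delay
                self.action = action
                self._timer = None

            def call(self, *args):
                if self._timer is not None:
                    self._timer.cancel()          # drop the run scheduled by the previous keystroke
                self._timer = threading.Timer(self.delay, self.action, args)
                self._timer.start()

        debounced = Debouncer(3.0, lambda value: print("fetch", value))
        for ch in "username":                     # simulates rapid keystrokes; only one "fetch" fires
            debounced.call(ch)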

    Read the article

  • How do I count the number of bytes read by TextReader.ReadLine()?

    - by Steve Guidi
    I am parsing a very large file of records (one per line, each of varying length), and I'd like to keep track of the number of bytes I've read in the file so that I can recover in the event of a failure. I wrote the following:

        string record = myTextReader.ReadLine();
        bytesRead += record.Length;
        ParseRecord(record);

    However, this doesn't work, since ReadLine() strips any CR/LF characters from the line. Furthermore, a line may be terminated by either CR, LF, or CRLF, which means I can't just add 1 to bytesRead. Is there an easy way to get the actual line length, or do I write my own ReadLine() method in terms of the granular Read() operations?
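
    For comparison, the byte-accounting idea sketched in Python, where iterating over a file opened in binary mode keeps the terminator bytes, so each line's byte count is exact whatever mix of CR, LF, and CRLF the file uses; records.txt and parse_record are hypothetical stand-ins for the question's file and ParseRecord:

        def parse_record(record):
            pass                                   # stand-in for the question's ParseRecord

        bytes_read = 0
        with open("records.txt", "rb") as f:       # binary mode preserves the CR/LF bytes
            for raw_line in f:
                bytes_read += len(raw_line)        # the length includes the line terminator
                parse_record(raw_line.rstrip(b"\r\n").decode("utf-8"))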

    Read the article

  • Replacing whitespace with sed in a CSV (to use w/ postgres copy command)

    - by Wells
    I iterate through a collection of CSV files in bash, running:

        iconv --from-code=ISO-8859-1 --to-code=UTF-8 ${FILE} | \
        sed -e 's/\"//g' | \
        sed -e 's/, /,/g' \
        > ${FILE}.utf8

    I run iconv to fix the UTF-8 characters, the first sed call removes the double-quote characters, and the final sed call is supposed to remove leading and trailing whitespace around the commas. HOWEVER, I still have a line like this in the saved file:

        FALSE,,,, 2.40,,

    The COPY command in Postgres is kind of dumb, so it thinks " 2.40" is not valid syntax for a numeric value. Where am I going wrong with my processing of the CSV file? Thanks!
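
    A different route than patching the sed patterns, shown only as a sketch: let a CSV parser split the fields and strip the whitespace per value (so " 2.40" becomes "2.40" before COPY ever sees it), handling the charset conversion at the same time; the file names are placeholders.

        import csv

        with open("input.csv", newline="", encoding="iso-8859-1") as src, \
             open("output.utf8.csv", "w", newline="", encoding="utf-8") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                writer.writerow(value.strip() for value in row)   # trim every field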

    Read the article

  • Complex regular expression

    - by Jose3d
    Hello, I would like to capture a substring of a text by choosing the number of characters, but if any word would be cut, then only take up to the last blank. As an example, if the text is "This is an example of text lorem ipsum, etc..." and I would like to get, for instance, 12 characters, that would be "This is an e". In this case "example" is cut, so I would rather get "This is an". Is it possible to do this with regular expressions? Thanks in advance. Jose
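
    It can be done with a regular expression: let a greedy quantifier take up to the limit and require the match to end at a blank (or at the end of the text), so it backtracks off any cut word. A sketch in Python, with a fallback for a first word longer than the limit:

        import re

        def truncate_at_word(text, limit=12):
            """Take at most `limit` characters, backing up to the last blank if a word would be cut."""
            m = re.match(r"(.{1,%d})(?:\s|$)" % limit, text)
            return m.group(1) if m else text[:limit]   # hard cut when one word exceeds the limit

        print(truncate_at_word("This is an example of text lorem ipsum, etc..."))
        # -> 'This is an'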

    Read the article

  • PHP Count the length of each value in an array/string (tags)

    - by 2by
    Users writing an article have the option to write some tags; tags are written like this: tag1, tag2, tag3. So the tags are stored like: $tags = "tag1, tag2, tag3"; I want to make sure every tag has a minimum of 3 characters, so I need to validate the tags. I have tried this:

        $tagsstring = explode(",", $tags);
        $tagslength = array_map('strlen', $tagsstring);
        if (min($tagslength) < 3) {
            echo "Error... Each tag has to be at least 3 characters.";
        }

    It seems to work, sometimes... But if you write: tag1, df it won't give an error. Any suggestions?

    Read the article

  • Reading bytes from JavaScript string

    - by Jan
    I have a string containing binary data in JS. Now I want to read, for example, an integer from it. So I get the first 4 characters, use charCodeAt, do some shifting, etc. to get an integer. The problem is that strings in JS are UTF-16 (instead of ASCII) and charCodeAt often returns values higher than 256. The Mozilla reference states that "The first 128 Unicode code points are a direct match of the ASCII character encoding." (What about ASCII values above 128?) How can I convert the result of charCodeAt to an ASCII value? Or is there a better way to convert a string of four characters to a 4-byte integer?
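
    The shift-and-combine arithmetic the question describes, sketched in Python (little-endian byte order; masking with & 0xFF is an assumption that each character really carries a single byte, which is lossy for code units above 255):

        def bytes_to_int_le(chars):
            """Combine four characters' byte values into one little-endian 32-bit integer."""
            b = [ord(c) & 0xFF for c in chars]            # keep only the low byte of each code unit
            return b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24)

        print(hex(bytes_to_int_le("\x78\x56\x34\x12")))   # -> 0x12345678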

    Read the article

  • UTF8 issues on Linux

    - by user363808
    Hi, I have some code that fetches some data from the database; the database codepage is UTF-8. When I run the code on a Linux box, some characters come out as question marks (?), but when I run the same code on a Windows server, all characters appear correctly. When I check $LANG, it returns en_SG.UTF-8. en_SG is something that doesn't look correct (it should be en_US), but the latter part of the returned string is UTF-8, which is good. Is there anything else that I can look into to fix the character corruption problem?

    Read the article
