Search Results

Search found 54956 results on 2199 pages for 'parsing error'.

Page 114/2199

  • jQuery .getJSON() Not Parsing All Objects

    - by Brad
    I'm using jQuery's .getJSON function to parse a set of search results from a Google Search Appliance. The search appliance has an XSLT stylesheet that returns the results as JSON data, which I validated with both JSONLint and Curious Concept's JSON Formatter. According to Firebug, the full result set is returned from the XMLHttpRequest, but I tried dumping the data (with jquery.dump.js) and it only ever parses back the first result. It does successfully get all the Google Search Protocol stuff, but it only ever sees one "R" object (i.e., one individual result). Has anybody had a similar problem with jQuery's .getJSON? I know it likes to fail silently if the JSON is not valid, but like I said, I validated the results with several validators and it should be good to go.

    Edit: This link shows the JSON results returned for a search for the word "google": http://bigbird.uww.edu/search?client=json_frontend&proxystylesheet=json_frontend&proxyrefresh=1&output=xml_no_dtd&q=google
    jQuery only retrieves the first "R" object, even though all "R" objects are siblings.
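
    One thing worth checking (an assumption, not confirmed from the excerpt): if the XSLT emits the sibling "R" results as repeated keys inside a single JSON object, many validators will tolerate it, but a conforming parser silently keeps only one value per key. A minimal Python sketch of the effect:

        import json

        # Duplicate keys pass many validators, but a standard parser
        # keeps only one value per key -- here, the last one wins.
        raw = '{"GSP": {"RES": {"R": {"ID": 1}, "R": {"ID": 2}, "R": {"ID": 3}}}}'
        parsed = json.loads(raw)
        print(parsed["GSP"]["RES"])   # {'R': {'ID': 3}} -- two results vanished

    If that is what is happening here, the stylesheet needs to emit the results as an array ("R": [...]) rather than as sibling keys.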

    Read the article

  • Having trouble parsing XML with jQuery

    - by Jack
    Hi guys, I'm trying to parse some XML data using jQuery. So far I have extracted the 'ID' attribute of the required nodes and stored the IDs in an array, and now I want to run a loop over the array members and eventually grab more attributes from the nodes specific to each ID. The problem is that once I get to the 'for' loop, it isn't looping: I think I may have written the XML path data incorrectly. It runs once and I receive the 'alert(arrayIds.length);' only once, and it only loops the correct number of times if I remove the subsequent XML path code. Here is my function:

        var arrayIds = new Array();
        $(document).ready(function(){
            $.ajax({
                type: "GET",
                url: "question.xml",
                dataType: "xml",
                success: function(xml) {
                    $(xml).find("C").each(function(){
                        $("#attr2").append($(this).attr('ID') + "<br />");
                        arrayIds.push($(this).attr('ID'));
                    });
                    for (i=0; i<arrayIds.length; i++) {
                        alert(arrayIds.length);
                        $(xml).find("C[ID='arrayIds[i]']").(function(){
                            // pass values
                            alert('test');
                        });
                    }
                }
            });
        });

    Any ideas?
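
    Two things stand out in the loop (offered as a reading of the code, not a tested fix): the selector embeds the literal text arrayIds[i] rather than the variable's value, so it needs concatenation, e.g. $(xml).find("C[ID='" + arrayIds[i] + "']"), and the bare .(function(){...}) is not valid JavaScript (presumably .each(function(){...}) was intended). The same interpolation pitfall, sketched in Python with the standard library's ElementTree:

        import xml.etree.ElementTree as ET

        xml = "<root><C ID='a1'/><C ID='b2'/></root>"
        root = ET.fromstring(xml)
        ids = [c.get("ID") for c in root.findall(".//C")]

        for node_id in ids:
            # The variable must be interpolated into the query; the string
            # "C[@ID='node_id']" would look for the literal text "node_id".
            for match in root.findall(f".//C[@ID='{node_id}']"):
                print(match.get("ID"))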

    Read the article

  • GRUB install error

    - by Rohit
    Whenever I try to install Ubuntu, I get a fatal error that reads: "Executing 'grub-install /dev/sda' failed. This is a fatal error." It's the same as a previously reported error, but my graphics appear to be running fine. Also, I'm a complete novice at this and really need simple instructions to understand what I'm doing. I've tried booting from a LiveCD and a USB stick. I don't want to dual boot, because it's an old computer from which I erased XP, and I plan on using only Linux on it. When I used a USB stick and set the persistent file storage high, I was able to run it, but only as long as the flash drive was plugged in.

    Read the article

  • Parsing NSXMLNode Attributes in Cocoa

    - by Jeffrey Kern
    Hello everyone, given the following XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <application name="foo">
            <movie name="tc" english="tce.swf" chinese="tcc.swf" a="1" b="10" c="20" />
            <movie name="tl" english="tle.swf" chinese="tlc.swf" d="30" e="40" f="50" />
        </application>

    How can I access the attributes ("english", "chinese", "name", "a", "b", etc.) and their associated values of the MOVIE nodes? I can currently traverse these nodes in Cocoa, but I'm at a loss as to how to access the data in the MOVIE NSXMLNodes. Is there a way I can dump all of the values from each NSXMLNode into a hashtable and retrieve values that way? I am using NSXMLDocument and NSXMLNodes.
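
    In Cocoa, element nodes are NSXMLElement instances, which expose their attributes (e.g. via attributeForName:), so casting the traversed node to NSXMLElement is the usual route. As a language-neutral illustration of the "dump the attributes into a hashtable" idea, here is a short Python sketch, where each element carries exactly such a map:

        import xml.etree.ElementTree as ET

        doc = """<application name="foo">
          <movie name="tc" english="tce.swf" chinese="tcc.swf" a="1" b="10" c="20"/>
          <movie name="tl" english="tle.swf" chinese="tlc.swf" d="30" e="40" f="50"/>
        </application>"""

        root = ET.fromstring(doc)
        for movie in root.findall("movie"):
            attrs = dict(movie.attrib)     # the per-node "hashtable" of attributes
            print(attrs["name"], attrs["english"], attrs.get("a"))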

    Read the article

  • Parsing string logic issue c#

    - by N0xus
    This is a follow on from this question. My program takes in a string that is comprised of two parts: a distance value and an id number, respectively. I've split these up and stored them in local variables inside my program. All of the id numbers are stored in a dictionary and are used to check the incoming distance value. I should note that each reading the device sends into my program arrives as a single string, and the next signal the program receives overwrites the previous data. Should the id key coming into my program match one inside my dictionary, then the variable held against that dictionary key should be updated. However, when I run my program, I don't get 6 different values; I get the same value everywhere, and they all update at the same time. This is all the code I have written trying to do this:

        Dictionary<string, string> myDictonary = new Dictionary<string, string>();
        string Value1 = "";
        string Value2 = "";
        string Value3 = "";
        string Value4 = "";
        string Value5 = "";
        string Value6 = "";

        void Start()
        {
            myDictonary.Add("11111111", Value1);
            myDictonary.Add("22222222", Value2);
            myDictonary.Add("33333333", Value3);
            myDictonary.Add("44444444", Value4);
            myDictonary.Add("55555555", Value5);
            myDictonary.Add("66666666", Value6);
        }

        private void AppendString(string message)
        {
            testMessage = message;
            string[] messages = message.Split(',');
            foreach(string w in messages)
            {
                if(!message.StartsWith(" "))
                    outputContent.text += w + "\n";
            }
            messageCount = "RSSI number " + messages[0];
            uuidString = "UUID number " + messages[1];
            if(myDictonary.ContainsKey(messages[1]))
            {
                Value1 = messageCount;
                Value2 = messageCount;
                Value3 = messageCount;
                Value4 = messageCount;
                Value5 = messageCount;
                Value6 = messageCount;
            }
        }

    How can I get it so that when the program receives the first key, for example 11111111, it only updates Value1? The information that comes through can be dynamic, so I'd like to avoid hardcoding as much as I possibly can.
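
    Two observations (offered as a reading of the code above, not a tested fix): C# strings are copied into the dictionary by value, so myDictonary never reflects later assignments to Value1..Value6, and the if-block assigns all six variables regardless of which key matched. Storing the reading against the matching key itself avoids both problems. A compact sketch of that shape in Python:

        # Key the readings directly by device ID instead of mirroring
        # them in six separate variables.
        readings = {"11111111": "", "22222222": "", "33333333": "",
                    "44444444": "", "55555555": "", "66666666": ""}

        def append_string(message: str) -> None:
            rssi, device_id = message.split(",")[:2]
            if device_id in readings:
                # Only the entry for the matching ID is updated.
                readings[device_id] = "RSSI number " + rssi

        append_string("42,22222222")
        print(readings["22222222"])   # RSSI number 42

    The C# equivalent is simply myDictonary[messages[1]] = messageCount; inside the ContainsKey check.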

    Read the article

  • Parsing question

    - by j-t-s
    Hi all, I have tried using several different parsers, as advised by somebody, but I don't believe they'd be of any use for this particular situation. I have a file that looks like this:

        mylanguagename(main)
        {
            OnLoad(protected)
            {
                Display(img, text, link);
            }

            Canvas(public)
            {
                Image img: "Images\my_image.png";
                img.Name: "img";
                img.Border: "None";
                img.BackgroundColor: "Transparent";
                img.Position: 10, 10;

                Text text: "This is a multiline str#ning. The #n creates a new line.";
                text.Name: text;
                text.Position: 10, 25;

                Link link: "Click here to enlarge img.";
                link.Name: "link";
                link.Position: 10, 60;
                link.Event: link.Clicked;
            }

            link.Clicked(sender, link, protected)
            {
                Link link: from sender;
                Message.Display: "You clicked link.";
            }
        }

    ... and I need to be able to parse the code above and convert it to a JavaScript (or JScript) equivalent. Can somebody please help, or get me started in the right direction? Thanks
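
    For a custom language like this, the usual route is a small lexer feeding a recursive-descent parser (or a parser generator). Purely as a starting-point sketch, and making assumptions about the token shapes from the sample above, a regex-based tokenizer in Python might begin like this:

        import re

        TOKEN_SPEC = [
            ("STRING", r'"[^"]*"'),
            ("NUMBER", r"\d+(?:\.\d+)?f?"),
            ("IDENT",  r"[A-Za-z_][\w.]*"),   # dotted names like img.Name
            ("PUNCT",  r"[{}();:,]"),
            ("SKIP",   r"\s+"),
        ]
        TOKEN_RE = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

        def tokenize(src):
            for m in TOKEN_RE.finditer(src):
                if m.lastgroup != "SKIP":
                    yield m.lastgroup, m.group()

        print(list(tokenize('Image img: "Images\\my_image.png";')))
        # [('IDENT', 'Image'), ('IDENT', 'img'), ('PUNCT', ':'),
        #  ('STRING', '"Images\\my_image.png"'), ('PUNCT', ';')]

    From the token stream, a recursive-descent parser can build a tree of blocks and property assignments, and a code generator can walk that tree to emit the JavaScript.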

    Read the article

  • 'Unable to mount Filesystem' Error

    - by Charles
    Trying to extract data from a 'bricked' Western Digital MyBook Live 2TB drive. I came across a forum that advised using Ubuntu (booted from a CD) on my MacBook. Managed to download and create a boot CD for Ubuntu (like this little operating system, btw). Booted the machine with the CD and plugged in the drive (which I had extracted from its casing, placed into an external USB SATA case, and plugged into the laptop). The drive is seen by Ubuntu, but each time I click on the drive, it gives me the following error:

        Unable to mount 2.0 TB Filesystem
        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb4,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog -
        try dmesg | tail or so

    I am new to this and spent quite some time searching this site to see if I could find a solution to this problem without troubling anyone. I came up with a few that came close, but some of the questioners mentioned that they had lost data... which scared me from going further. I basically need to extract one particular folder from the drive. If I can get this volume 'sdb4' to mount, there is a folder called 'My_Work' which I need to back up; the rest I have/had a copy of. When I typed in dmesg | tail, I got several lines, but I think the relevant ones are:

        [  406.864677] EXT4-fs (sdb4): bad block size 65536
        [  429.098776] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only
        [  439.786365] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only
        [  445.982692] EXT4-fs (sdb4): bad block size 65536
        [ 1565.841690] EXT4-fs (sdb4): bad block size 65536

    I read somewhere to try/check 'sudo fdisk -l /dev/sdb4'. It gave me the following result:

        Disk /dev/sdb4: 1995.8 GB, 1995774623744 bytes
        255 heads, 63 sectors/track, 242639 cylinders, total 3897997312 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdb4 doesn't contain a valid partition table

    This is where I got frustrated and decided to try to get help on this without digging myself deeper into a hole! I understand that the answer may already be out there. If so, could someone please point me in the right direction? And if not, could someone please help resolve (if possible) my situation!

    Read the article

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    Hi, I have an nginx server with 5 backend servers. We serve around 400-500 requests/second. I have started getting a large number of "upstream timed out" errors (110: Connection timed out). The error string in error.log looks like:

        2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {IP}, server: {domain}, request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}", host: "{domain}", referrer: "{referrer}"

    Any suggestions on how to debug such errors? I am unable to find a munin plugin to keep a check on the number of upstream errors. Some days the number of errors is way too high, and some days it's a more decent 3-digit number. A munin graph would probably help us find a pattern or a correlation with something else. How can we bring the number of such errors down to zero?
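
    As a stopgap before a proper munin plugin, tallying the error log per day is a few lines of scripting. A minimal Python sketch, assuming the default nginx error-log timestamp format shown above:

        import re
        from collections import Counter

        pattern = re.compile(r"^(\d{4}/\d{2}/\d{2}) \d{2}:\d{2}:\d{2} \[error\] .*upstream timed out")

        counts = Counter()
        with open("error.log") as log:
            for line in log:
                match = pattern.match(line)
                if match:
                    counts[match.group(1)] += 1

        for day, total in sorted(counts.items()):
            print(day, total)

    Feeding those per-day totals into munin (or any grapher) would show whether the spikes correlate with traffic or with a particular backend.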

    Read the article

  • MacPorts 1.8.2 fails to build db46 on OS X 10.6.3

    - by themoch
    I'm trying to put a dev environment on my Mac, and to do so I need to install several packages which require db46. When running sudo port install db46 I get the following error:

        ---> Computing dependencies for db46
        ---> Fetching db46
        ---> Attempting to fetch patch.4.6.21.1 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        ---> Attempting to fetch patch.4.6.21.2 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        ---> Attempting to fetch patch.4.6.21.3 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        ---> Attempting to fetch patch.4.6.21.4 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        ---> Attempting to fetch db-4.6.21.tar.gz from http://distfiles.macports.org/db4/4.6.21_6
        ---> Verifying checksum(s) for db46
        ---> Extracting db46
        ---> Applying patches to db46
        ---> Configuring db46
        ---> Building db46
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/work/db-4.6.21/build_unix" && /usr/bin/make -j2 all " returned error 2
        Command output:
        ../dist/../libdb_java/db_java_wrap.c:9464: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9487: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        [the same "expected '=', ',', ';', 'asm' or '__attribute__'" error repeats for db_java_wrap.c lines 9509 through 10071]
        make: *** [db_java_wrap.lo] Error 1
        make: *** Waiting for unfinished jobs....
        Note: Some input files use unchecked or unsafe operations.
        Note: Recompile with -Xlint:unchecked for details.
        cd ./classes && jar cf ../db.jar ./com/sleepycat
        Error: Status 1 encountered during processing.

    I have removed my /usr/local folder completely and it does not seem to help.

    Read the article

  • Parsing CSV File to MySQL DB in PHP

    - by Austin
    I have a CSV file of some 350 lines listing all sorts of vendors that fall into Clothes, Tools, Entertainment, etc. categories. Using the following code I have been able to print out my CSV file:

        <?php
        $fp = fopen('promo_catalog_expanded.csv', 'r');
        echo '<tr><td>';
        echo implode('</td><td>', fgetcsv($fp, 4096, ','));
        echo '</td></tr>';
        while(!feof($fp)) {
            list($cat, $var, $name, $var2, $web, $var3, $phone,$var4, $kw,$var5, $desc) = fgetcsv($fp, 4096);
            echo '<tr><td>';
            echo $cat. '</td><td>' . $name . '</td><td><a href="http://www.' . $web .'" target="_blank">' .$web.'</a></td><td>'.$phone.'</td><td>'.$kw.'</td><td>'.$desc.'</td>' ;
            echo '</td></tr>';
        }
        fclose($file_handle);
        show_source(__FILE__);
        ?>

    The first thing you will probably notice is the extraneous vars within the list(). This is because of how the Excel spreadsheet/CSV file is laid out:

        Category,,Company Name,,Website,,Phone,,Keywords,,Description
        ,,,,,,,,,,
        Clothes,,4imprint,,4imprint.com,,877-466-7746,,"polos, jackets, coats, workwear, sweatshirts, hoodies, long sleeve, pullovers, t-shirts, tees, tshirts,",,An embroidery and apparel company based in Wisconsin.
        ,,Apollo Embroidery,,apolloemb.com,,1-800-982-2146,,"hats, caps, headwear, bags, totes, backpacks, blankets, embroidery",,An embroidery sales company based in California.

    One thing to note is that the last line starts with two commas, as it is also listed within the "Clothes" category. My concern is that I am going about the CSV output wrong. Should I be using a foreach loop instead of this list() approach? Should I first get rid of any unnecessary blank columns? Please advise any flaws you may find and improvements I can use, so I can be ready to import this data to a MySQL DB.
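
    The spacer columns and the inherited category can both be normalized in one pass before the data goes anywhere near MySQL. A sketch of that cleanup in Python (the same logic ports directly to PHP's fgetcsv loop), assuming the layout shown above:

        import csv

        rows, last_category = [], None
        with open("promo_catalog_expanded.csv", newline="") as f:
            reader = csv.reader(f)
            header = next(reader)[0::2]        # real fields sit in the even columns
            for raw in reader:
                row = raw[0::2]                # drop the blank spacer columns
                if not any(row):
                    continue                   # skip the all-comma separator rows
                if row[0]:
                    last_category = row[0]
                else:
                    row[0] = last_category     # ",,Apollo..." inherits "Clothes"
                rows.append(row)

        print(header)
        print(rows[1][0], rows[1][1])          # Clothes Apollo Embroidery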

    Read the article

  • Parsing and validating a DNOA-issued access token in a Java application

    - by Regfor
    I am creating an OAuth 2.0 access token using DotNetOpenAuth, like here:

        public AccessTokenResult CreateAccessToken(
            IAccessTokenRequest accessTokenRequestMessage)
        {
            var token = new AuthorizationServerAccessToken();
            token.Lifetime = TimeSpan.FromMinutes(10);

            var signCert = LoadCert(Config.STS_CERT);
            token.AccessTokenSigningKey = (RSACryptoServiceProvider) signCert.PrivateKey;

            var encryptCert = LoadCert(Config.SERVICE_CERT);
            token.ResourceServerEncryptionKey = (RSACryptoServiceProvider) encryptCert.PublicKey.Key;

            var result = new AccessTokenResult(token);
            return result;
        }

    A token issued by this method looks like:

        {
            "access_token": "gAAAAH44atDAyWeu8BFwhLof7rtBRpiZrSlAC0zci8xU81tXHZDVkBX8LXrMLDHDYfimjuSOsdrXQIAY7Xf4JnK1x_fo_JSmvuiA5CvO5JUJNuEmHNSlR4ePO4tBPkOHQnN50DIRJMbHJdQrFZCqqaWz6s0iuvCuTMcTua6J0yaTPQaD9AAAAIAAAADHgef78SHh4-K2aZ87xYRoRFfmQ0lc3ET7Y5vAS7BadLM5btYvmrSkAWsCxhUji92D0LbKgyVkbQuuw5LnRP_zsxe_W_VztTqZ5m9PwJDL6q7McrUfiVQj_XBQqpv2slBeouD0F1k1KjVedR9Pwm7ganz4R7dmeYivnx8f0_isEGBqSZrtnILoit3SOCPyVxmIwizYwLE2bQOtlwVpqtrBMyzc4MVPVyaSiJb2-Lj5tOftEWl0k93Qmr8uzmjDyeCn3TsFX0f_qFgCmxp32_kt4ZTMf4zgmh5yUS1Hy7ERNQxpCIxRTx9yma7JN_K5Pss",
            "token_type": "bearer",
            "expires_in": 43200,
        }

    Will a Java application be able to parse and validate a token issued in this manner?

    Read the article

  • How to handle building and parsing HTTP URL's / URI's / paths in Perl

    - by Robert S. Barnes
    I have a wget-like script which downloads a page and then retrieves all the files linked in img tags on that page. Given the URL of the original page and the link extracted from the img tag in that page, I need to build the URL for the image file I want to retrieve. Currently I use a function I wrote:

        sub build_url {
            my ( $base, $path ) = @_;

            # if the path is absolute just prepend the domain to it
            if ($path =~ /^\//) {
                ($base) = $base =~ /^(?:http:\/\/)?(\w+(?:\.\w+)+)/;
                return "$base$path";
            }

            my @base = split '/', $base;
            my @path = split '/', $path;

            # remove a trailing filename
            pop @base if $base =~ /[[:alnum:]]+\/[\w\d]+\.[\w]+$/;

            # check for relative paths
            my $relcount = $path =~ /(\.\.\/)/g;
            while ( $relcount-- ) {
                pop @base;
                shift @path;
            }

            return join '/', @base, @path;
        }

    The thing is, I'm surely not the first person solving this problem, and in fact it's such a general problem that I assume there must be some better, more standard way of dealing with it, using either a core module or something from CPAN - although a core module is preferable. I was thinking about File::Spec but wasn't sure if it has all the functionality I would need.
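
    There is a standard tool for this: the URI module from CPAN resolves relative references against a base with URI->new_abs($path, $base) (not in core, but the de facto standard). For comparison, the same RFC 3986 resolution rules are what Python's standard library implements:

        from urllib.parse import urljoin

        base = "http://example.com/gallery/page.html"
        print(urljoin(base, "/img/a.png"))     # http://example.com/img/a.png
        print(urljoin(base, "../img/b.png"))   # http://example.com/img/b.png
        print(urljoin(base, "thumbs/c.png"))   # http://example.com/gallery/thumbs/c.png

    Hand-rolled path arithmetic like the sub above tends to miss edge cases (query strings, "./" segments, protocol-relative links), which is why delegating to a URL library is usually worth the dependency.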

    Read the article

  • Need help parsing HTML with a regex in python

    - by laspal
    Hi, my string is:

        mystring = "<tr><td><span class='para'><b>Total Amount : </b>INR (Indian Rupees) 100.00</span></td></tr>"

    My problem here is that I have to search for and extract the total amount:

        test = re.search("(Indian Rupees)(\d{2})(?:\D|$)", mystring)

    but my test gives me None. How can I get the value? The values can be 10.00, 100.00, 1000.00. Thanks
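
    The pattern fails because the text has a closing parenthesis and a space between "Indian Rupees" and the number, and \d{2} would not cover amounts like 1000.00 anyway. One pattern that matches the sample (for real page scraping, an HTML parser would be more robust than a regex):

        import re

        mystring = ("<tr><td><span class='para'><b>Total Amount : </b>"
                    "INR (Indian Rupees) 100.00</span></td></tr>")

        # Match the literal closing paren, optional space, then the amount.
        m = re.search(r"Indian Rupees\)\s*([\d,]+\.\d{2})", mystring)
        if m:
            print(m.group(1))   # 100.00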

    Read the article

  • Regular expression help parsing SQLIO output

    - by jaspernygaard
    Hi, I've been working on a regular expression to parse the output of a series of SQLIO runs. I've gotten pretty far, but not quite there yet. I'm seeking a 100% regex solution with no pre-manipulation of the input. Could anyone assist with a little guidance on the following regular expression:

        .*v(?<SQLIOVersion>\d\.\d).*\n.*\n(?<threads>\d*)\s.*for\s(?<Seconds>\d+).*\n.*using\s(?<clustersize>[0-9]*)KB.*\n.*\n.*size:\s(?<currentfilesize>\d+).*\n.*\n.*\n.*\n.*\s(?<IOs>\d*\.\d*).*\n.*\s(?<MBs>\d*\.\d*).*\n.*\n.*\s(?<MinLatency_ms>\d+).*\n.*\s(?<AvgLatency_ms>\d+).*\n.*\s(?<MaxLatency_ms>\d+).*\n.*\n.*\n\%\:..(?<ms>\d*\s+)*

    Here's a snippet of the output - note the headers, which change during the SQLIO batch run: File

    Read the article

  • Need help to fix "html.sty not found" error

    - by GGS
    I installed TeX Live 2012 on an Ubuntu 12.04 LTS 64-bit machine following the instructions given on the following web page: How do I install the latest TeX Live 2012? After a successful installation (I think), I get the following error when I run pdflatex to compile a given tex file:

        This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
         restricted \write18 enabled.
        entering extended mode
        (./user_guide.tex
        LaTeX2e <2011/06/27>
        Babel and hyphenation patterns for english, dumylang, nohyphenation, loaded.
        (/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
        Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
        (/usr/share/texlive/texmf-dist/tex/latex/base/size12.clo))

        ! LaTeX Error: File `html.sty' not found.

        Type X to quit or <RETURN> to proceed,
        or enter new name. (Default extension: sty)

    Would you help me find a solution? Thank you in advance.

    Read the article

  • parsing/matching string occurrence in C

    - by David
    I have the following string:

        const char *str = "\"This is just some random text\" 130 28194 \"Some other string\" \"String 3\"";

    I would like to get the integer 28194. Of course the integer varies, so I can't just do strstr("28194"). So I was wondering what would be a good way to get that part of the string? I was thinking of using #include <regex.h>, for which I already have a procedure to match regexps, but I'm not sure what the regexp would look like in C using the POSIX-style notation - something like [:alpha:]+[:digit:] - and whether performance will be an issue. Or would it be better to use strchr/strstr? Any ideas will be appreciated.
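
    With POSIX regex.h the pattern would be an extended regular expression such as "[^"]*" +([0-9]+) +([0-9]+) compiled with REG_EXTENDED, capturing the second bare integer after the first quoted segment (note that POSIX classes need double brackets, i.e. [[:digit:]], when used). A quick sketch of the same capture logic in Python, just to validate the pattern shape:

        import re

        s = '"This is just some random text" 130 28194 "Some other string" "String 3"'

        # Skip the first quoted segment, then capture the two bare integers.
        m = re.search(r'"[^"]*"\s+(\d+)\s+(\d+)', s)
        if m:
            print(m.group(2))   # 28194

    For a one-shot scan of a short string like this, regex performance is unlikely to matter; strchr/strstr would only win if this runs millions of times in a tight loop.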

    Read the article

  • parsing urls from windows batch file

    - by modest
    I have a text file (myurls.txt) whose contents are a list of URLs, as follows:

        Slides_1: http://linux.koolsolutions.com/svn/ProjectA/tags/REL-1.0
        Exercise_1: http://linux.koolsolutions.com/svn/ProjectA/tags/REL-1.0
        Slides_2: http://linux.koolsolutions.com/svn/oldproject/ProjectB/tags/REL-2.0
        Exercise_2: http://linux.koolsolutions.com/svn/ProjectB/tags/REL-1.0
        Exercise_3: http://linux.koolsolutions.com/svn/BlueBook/ProjectA/tags/REL-1.0

    Now I want to parse this text file in a for loop such that after each iteration (e.g., taking the first URL above) I have the following information in different variables:

        %i% = REL-1.0
        %j% = http://linux.koolsolutions.com/svn/ProjectA
        %k% = http://linux.koolsolutions.com/svn/ProjectA/tags/REL-1.0

    After some experimenting I have the following code, but it only works (kind of) if the URLs have the same number of slashes:

        @echo off
        set FILE=myurls.txt
        FOR /F "tokens=2-9 delims=/ " %%i in (%FILE%) do (
            @REM <do something with variables i, j and k.>
        )

    I am fine with other solutions, e.g. using Windows Script Host/VBScript, as long as they run on a default Windows XP/7 installation. In other words, I know I could use awk, grep, sed, python, etc. for Windows and get the job done, but I don't want the users to have to install anything besides a standard Windows installation.
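
    The varying-slash-count problem disappears if each line is split on the "/tags/" marker instead of on every slash, since every sample URL carries exactly one such segment. The poster rules out extra installs, so this Python sketch only pins down the split logic before translating it to batch or VBScript (in batch, roughly a FOR /F with "delims=" per line combined with %var:/tags/=...% string substitution):

        with open("myurls.txt") as f:
            for line in f:
                if not line.strip():
                    continue
                label, url = line.split(None, 1)   # "Slides_1:", then the URL
                base, _, rel = url.strip().partition("/tags/")
                print(rel, base, url.strip())      # %i%, %j%, %k% from the question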

    Read the article

  • getURL, parsing web-site with german special characters

    - by Kay
    I am using getURL() and htmlParse() - how can I make web-site content with special characters display properly?

        library(RCurl); library(XML)
        script <- getURL("http://www.floraweb.de/pflanzenarten/foto.xsql?suchnr=814")
        doc <- htmlParse(script, encoding = "UTF-8")
        xpathSApply(doc, "//div[@id='content']//p", xmlValue)[2]
        [1] "Bellis perennis L., GÃ¤nseblÃ¼mchen"
        # should say: [1] "Bellis perennis L., Gänseblümchen"

        > Sys.getlocale()
        [1] "LC_COLLATE=German_Austria.1252;LC_CTYPE=German_Austria.1252;LC_MONETARY=German_Austria.1252;LC_NUMERIC=C;LC_TIME=German_Austria.1252"
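
    The symptom is classic mojibake: UTF-8 bytes decoded as Latin-1/Windows-1252 (plausible given the German_Austria.1252 locale). In RCurl, passing .encoding = "UTF-8" to getURL() before handing the text to htmlParse() is the usual fix. The mechanism itself, illustrated in Python:

        # "ä" is the two UTF-8 bytes 0xC3 0xA4; read as Latin-1, those
        # bytes display as the two characters "Ã¤".
        mangled = "GÃ¤nseblÃ¼mchen"
        fixed = mangled.encode("latin-1").decode("utf-8")
        print(fixed)   # Gänseblümchen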

    Read the article

  • How do I make BeautifulSoup parse the contents of textarea tags as HTML?

    - by brofield
    Before 3.0.5, BeautifulSoup used to treat the contents of <textarea> as HTML. It now treats it as text. The document I am parsing has HTML inside the textarea tags, and I am trying to process it. I've tried:

        for textarea in soup.findAll('textarea'):
            contents = BeautifulSoup.BeautifulSoup(textarea.contents)
            textarea.replaceWith(contents.html(text=True))

    But I'm getting errors. I can't find this in the documentation, and the alternative parsers aren't helping. Anyone know how I can parse the textareas as HTML?
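
    One issue visible in the snippet: textarea.contents is a list of nodes, not a string, so it needs joining (or .string) before re-parsing. In the modern bs4 package (BeautifulSoup 3 is long unmaintained), re-parsing each textarea's text as its own document looks like this sketch:

        from bs4 import BeautifulSoup

        page = "<form><textarea>&lt;p&gt;Hello &lt;b&gt;world&lt;/b&gt;&lt;/p&gt;</textarea></form>"
        soup = BeautifulSoup(page, "html.parser")

        for textarea in soup.find_all("textarea"):
            # .string holds the (entity-unescaped) text content; parse it
            # again as a standalone HTML fragment.
            inner = BeautifulSoup(textarea.string, "html.parser")
            print(inner.find("b").get_text())   # world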

    Read the article

  • Parsing a comma-separated list

    - by alex
    I have a delimited list of name=value pairs, for example:

        string s = "param1=true;param2=4;param3=2.0f;param4=sometext;";

    I need functions like:

        public bool ExtractBool(string parameterName, string @params);
        public int ExtractInt(string parameterName, string @params);
        public float ExtractFloat(string parameterName, string @params);
        public string ExtractString(string parameterName, string @params);

    Are there special functions in .NET that can help me with such a list? PS: the parameter names are always the same within a list.
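
    .NET has no one-call parser for this exact shape, but string.Split plus the numeric Parse/TryParse methods covers it in a few lines (for '&'-delimited input, HttpUtility.ParseQueryString would also apply). A sketch of the extraction logic in Python, with the trailing 'f' on the float handled explicitly:

        def parse_params(s: str) -> dict:
            pairs = (p.split("=", 1) for p in s.strip(";").split(";") if p)
            return dict(pairs)

        params = parse_params("param1=true;param2=4;param3=2.0f;param4=sometext;")

        def extract_bool(name, p):  return p[name] == "true"
        def extract_int(name, p):   return int(p[name])
        def extract_float(name, p): return float(p[name].rstrip("f"))

        print(extract_bool("param1", params),   # True
              extract_int("param2", params),    # 4
              extract_float("param3", params),  # 2.0
              params["param4"])                 # sometext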

    Read the article

  • Error after installing Ubuntu 12.04 using Wubi

    - by KJ50
    After using the Windows Ubuntu Installer, I am prompted to restart, so I follow the directions. When I try to start Ubuntu after restarting, the desktop background appears, but then a loading bar appears with the title:

        Verifying the installation configuration...

    While this is loading, an error window pops up that says:

        No root file system is defined
        Please correct this from the partitioning menu

    There is only an 'OK' button available to click, and if I click it the same error window appears again. I do not know how to get to the "partitioning menu" from this state, so the only option I have is to shut down my computer. What can I do so that Ubuntu finds a "root file system"? Can I diagnose this problem via Windows? Does anyone have any insight? FYI - I am using a new ultrabook with 6GB RAM, a 3rd-gen Intel i7 processor, and no CD/DVD drive.

    Read the article

  • 12.10 Grub-customizer error

    - by SteveK
    I am trying to use grub-customizer in 12.10; it ran fine in 12.04. I now get the error:

        grub-mkconfig couldn't be executed successfully. error message:
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.5.0-18-generic
        Found initrd image: /boot/initrd.img-3.5.0-18-generic
        Found linux image: /boot/vmlinuz-3.2.0-32-generic-pae
        Found initrd image: /boot/initrd.img-3.2.0-32-generic-pae
        Found linux image: /boot/vmlinuz-3.5.0-18-generic
        Found initrd image: /boot/initrd.img-3.5.0-18-generic
        Found linux image: /boot/vmlinuz-3.2.0-32-generic-pae
        Found initrd image: /boot/initrd.img-3.2.0-32-generic-pae
        Found memtest86+ image: /boot/memtest86+.bin
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows Recovery Environment (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2

    I have removed and reinstalled it to no avail.

        steve@steve-Ubuntu:~$ grub-mkconfig --version
        grub-mkconfig (GRUB) 2.00-7ubuntu11

    I also noticed that the file device.map does not exist, but I read in other forums that it is not present in 12.10. Help please.

    Read the article

  • Parsing EXIF's "ExposureTime" using PHP

    - by MarkL
    A photo with an exposure of 1/640 has an EXIF "ExposureTime" field of "15625/10000000". I am not sure why some photos display this value in a readable format (e.g., "1/100"), but I need to convert this "15625/10000000" back to "1/640". How? :) Thanks.
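
    Some cameras store ExposureTime as an unreduced rational, so the conversion is just reducing the fraction by its greatest common divisor (15625/10000000 divides down to exactly 1/640). In Python, the standard library does this automatically:

        from fractions import Fraction

        exposure = Fraction(15625, 10000000)   # as stored in ExposureTime
        print(exposure)                        # 1/640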

    Read the article

  • parsing email text reply/forward

    - by Theofanis Pantelides
    Hi, I am creating a web-based email client using C# ASP.NET. What is confusing is that various email clients seem to add the original text in a lot of different ways when replying to an email. What I was wondering is whether there is some sort of standardized way to disambiguate this process? Thank you -Theo
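
    There is no formal standard for reply quoting; clients conventionally use ">" line prefixes, an "On ... wrote:" attribution line, or Outlook's "-----Original Message-----" divider, so in practice quoted history is stripped heuristically. A minimal Python sketch of that heuristic (the marker list is illustrative, not exhaustive):

        import re

        REPLY_MARKERS = [
            re.compile(r"^-+\s*Original Message\s*-+$", re.I),
            re.compile(r"^On .+ wrote:$"),
            re.compile(r"^>"),
        ]

        def strip_quoted(body: str) -> str:
            kept = []
            for line in body.splitlines():
                if any(p.match(line.strip()) for p in REPLY_MARKERS):
                    break          # everything below is quoted history
                kept.append(line)
            return "\n".join(kept).rstrip()

        print(strip_quoted("Thanks!\n\nOn Mon, 3 Jan 2011, Bob wrote:\n> hi"))
        # Thanks!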

    Read the article
