Search Results

Search found 6384 results on 256 pages for 'cgi parse qs'.

Page 175 of 256

  • YouTube Python: get thumbnail

    - by dkgirl
    Is there a simple way to get the default thumbnail from a YouTube entry object (gdata.youtube.YouTubeVideoEntry)? I tried entry.media.thumbnail, but that gives me four thumbnail objects. Can I always trust that there are four? Can I tell which one is the default thumbnail that also appears on the YouTube search page, and how would I get it? Or do I have to alter one of the others? When I know the video_id I use http://i4.ytimg.com/vi/{{video_id}}/default.jpg, so it would also be helpful to get the video_id. Do I really have to parse one of the URLs to get at the video_id? It seems strange that they don't provide this information directly. One way of pulling the id out of a URL is sketched after this entry.

    Read the article
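
    A minimal sketch (Python 2, gdata-era) of pulling the video id out of the entry's watch URL with the standard query-string parser and building the default-thumbnail URL from it. The attribute entry.media.player.url is an assumption here; check the attribute names in your gdata version.

        from urlparse import urlparse, parse_qs   # cgi.parse_qs is the older equivalent

        def default_thumbnail(entry):
            # assumed shape: http://www.youtube.com/watch?v=abc123&feature=youtube_gdata
            watch_url = entry.media.player.url
            query = parse_qs(urlparse(watch_url).query)   # {'v': ['abc123'], ...}
            video_id = query['v'][0]
            return 'http://i4.ytimg.com/vi/%s/default.jpg' % video_id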

  • How to use a regular expression and assign the result to variables in Android?

    - by ChengYing
    I have a string named s_Result which is parsed from the Internet. The format may be "Distance: 2.8km (about 9 mins)", and there are four variables: f_Distance, m_DistanceUnit, f_timeEst and m_timeEstUnit. My question is how to parse s_Result and assign 2.8, km, 9 and mins to f_Distance, m_DistanceUnit, f_timeEst and m_timeEstUnit respectively, using a regular expression. I tried "\d+(\.\d+)?" in RegEx Tester and it showed two matches, but when I use "\\d+(\\.\\d+)?" in my Android code it finds no matches! Any suggestions as to what might be going wrong? One capture-group approach is sketched after this entry.

    Read the article
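
    A quick sketch of the capture-group approach, shown in Python for brevity; the same pattern works with java.util.regex, where every backslash is doubled in the string literal ("(\\d+(?:\\.\\d+)?)\\s*([a-zA-Z]+)") and the pieces are read via Matcher.group(1) and Matcher.group(2).

        import re

        s_result = "Distance: 2.8km (about 9 mins)"
        # each match captures a number and the unit that immediately follows it
        pairs = re.findall(r"(\d+(?:\.\d+)?)\s*([a-zA-Z]+)", s_result)
        # pairs == [('2.8', 'km'), ('9', 'mins')]
        (f_distance, distance_unit), (f_time_est, time_est_unit) = pairs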

  • My Lucene queries only ever find one hit

    - by Bob
    I'm getting started with Lucene.Net (stuck on version 2.3.1). I add sample documents with this:

        Dim indexWriter = New IndexWriter(indexDir, New Standard.StandardAnalyzer(), True)
        Dim doc = New Document()
        doc.Add(New Field("Title", "foo", Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.NO))
        doc.Add(New Field("Date", DateTime.UtcNow.ToString, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.NO))
        indexWriter.AddDocument(doc)
        indexWriter.Close()

    I search for documents matching "foo" with this:

        Dim searcher = New IndexSearcher(indexDir)
        Dim parser = New QueryParser("Title", New StandardAnalyzer())
        Dim Query = parser.Parse("foo")
        Dim hits = searcher.Search(Query)
        Console.WriteLine("Number of hits = " + hits.Length.ToString)

    No matter how many times I run this, I only ever get one result. Any ideas?

    Read the article

  • Lexing newlines in Scala StdLexical?

    - by Nick Fortescue
    I'm trying to lex (then parse) a C-like language. In C there are preprocessor directives where line breaks are significant, while in the actual code they are just whitespace. One way of doing this would be a two-pass process like early C compilers used: have a separate preprocessor for the # directives, then lex its output. However, I wondered if it is possible to do it in a single lexer. I'm pretty happy writing the Scala parser-combinator code, but I'm not so sure how StdLexical handles whitespace. Could someone write some simple sample code that could, say, lex a #include line (using the newline) and some trivial code (ignoring the newline)? Or is this not possible, and is it better to go with the two-pass approach?

    Read the article

  • Benefit of outputting JSON as opposed to plain HTML

    - by Franco
    Hey guys, just wondering which is best here. I want to output data from a table in my DB and then put a lot of that data into an HTML table built on the fly on my page. I'm working with Java on the server side. Basically I pull the results from the DB and have the raw data; what next? There is a chance I may want to take data from multiple tables and combine it into one table on my site. Once I retrieve the results of the query from the DB, do I create a JSON string from them which I can parse with jQuery when the object returns to my browser? (A sub-question: is a StringBuilder the correct way to build a JSON object for output?) Or should I build the HTML as a string and output that to the browser instead? Which is better and why? Thanks in advance!

    Read the article

  • C# multi-threaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being added to the folder. I intend to multithread this operation, as the single-threaded prototype took six minutes to process 1000 files. I'd like to have reader and writer threads as follows: while the reader threads are reading the files, the writer threads process them. Once a reader starts reading a file, I'd like to mark it as being processed, for example by renaming it; once it has been read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure would avoid locks? Do you have a better approach to this scheme that you'd like to share? A queue-based sketch of the producer/consumer split is included after this entry.

    Read the article
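
    A minimal sketch of the queue-based producer/consumer shape, written in Python for brevity; the same structure maps onto C# with a thread-safe queue (for example ConcurrentQueue<T> or BlockingCollection<T> on .NET 4). process() is a placeholder for the actual parsing step and 'incoming' is an assumed folder name; the rename-to-claim convention follows the question.

        import os, Queue, threading

        NUM_WORKERS = 4
        work_queue = Queue.Queue()     # the Queue does its own locking, so no explicit locks are needed
        DONE = object()                # sentinel used to shut the workers down

        def reader(folder):
            # claim each file by renaming it, then hand its contents to the workers
            for name in os.listdir(folder):
                if name.endswith(('.processing', '.completed')):
                    continue
                claimed = os.path.join(folder, name + '.processing')
                os.rename(os.path.join(folder, name), claimed)
                with open(claimed) as f:
                    work_queue.put((claimed, f.read()))
            for _ in range(NUM_WORKERS):
                work_queue.put(DONE)

        def worker():
            while True:
                item = work_queue.get()
                if item is DONE:
                    break
                path, text = item
                process(text)                      # process() is a placeholder for your parsing step
                os.rename(path, path.replace('.processing', '.completed'))

        threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
        for t in threads:
            t.start()
        reader('incoming')
        for t in threads:
            t.join()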

  • Security header is not valid - using curl php

    - by toni
    Hi all, I'm implementing Express Checkout (the PayPal API) using PHP. I have no problem with the first step, SetExpressCheckout: I get ACK=Success. But with the GetExpressCheckoutDetails method I get "Security header is not valid". Trying to figure out the problem, I think it may be that curl is not working well. What I did was copy the whole URL:

        https://api-3t.sandbox.paypal.com/nvp?USER=sanbox_1276609583_biz_api1.gmail.com&PWD=1276609589&SIGNATURE=AYVosblmD7khKkvvb.bNxvFT0OQ2A8GopwByEuC.CfMHt65VaUmvAEy-&VERSION=62.0&token=EC-3YG18670X88588437&METHOD=GetExpressCheckoutDetails

    and paste it into the browser. This results in:

        TOKEN=EC%2d3YG18670X88588437&CHECKOUTSTATUS=PaymentActionNotInitiated&TIMESTAMP=2010%2d06%2d16T07%3a40%3a12Z&CORRELATIONID=e1a1e469bf066&ACK=Success&VERSION=62%2e0&BUILD=1356926...

    But when that URL is executed through my function, it does not work. Below is my function:

        function mycurl($url, $querystr) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_VERBOSE, 1);
            curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
            curl_setopt($ch, CURLOPT_POST, 1);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $querystr);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $response = curl_exec($ch);
            curl_close($ch);
            return $response;
        }

    I hope somebody can help with this. Thanks so much. Note: I used the sandbox for this. I created a sandbox account with a Business account to represent a merchant and a Personal account to represent a buyer, and I used these endpoints:

        endpoint URL: api-3t.sandbox.paypal.com/nvp
        sandbox URL:  www.sandbox.paypal.com/cgi-bin/webscr

    so that should not be the issue.

    Read the article

  • Parsing an Open XML doc via styled blocks

    - by Chris B. Behrens
    I'm working with docx docs, and I need to parse a document into sections on the basis of headings styled with the "heading 1" style. So if I had a doc like this (markup is pseudocode):

        <doc>
            <title style>Doc Title</title style>
            <heading1>First Section</heading1>
            ...
            <heading2>Second Section</heading2>
            ...
            <heading3>Third Section</heading3>
            ...
        </doc>

    I'd want to break this into a doc with four sections, the first being the content that precedes the first section. I figure that this is probably pretty simple once you're familiar with Open XML, but I am not. TIA.

    Read the article

  • How can I capture a multiline pattern using a regular expression in Java?

    - by lampShade
    I have a text file that I need to parse using regular expressions. The text that I need to capture is in multiline groups like this:

        truck
        zDoug
        Doug's house
        (123) 456-7890
        [email protected]
        30
        61234.56
        8/10/2003
        vehicle
        eRob
        Rob's house
        (987) 654-3210
        [email protected]

    For this example I need to capture truck followed by the next seven lines. In other words, in this "block" I have 8 groups. This is what I've tried, but it will not capture the next line: (truck)\n(\w). NOTE: I'm using the program RegExr to test my regex before I port it to Java. One multiline pattern is sketched after this entry.

    Read the article
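
    A sketch of a pattern that grabs the type line plus the seven lines after it, shown in Python; the same regex works in java.util.regex with Pattern.MULTILINE (with the backslashes doubled in the Java string literal).

        import re

        # group 1: the record type, group 2: the next seven lines as one chunk
        block = re.compile(r"^(truck|vehicle)\n((?:.*\n){6}.*)", re.MULTILINE)

        for m in block.finditer(text):          # text holds the file contents
            record_type = m.group(1)
            fields = m.group(2).splitlines()    # the seven individual lines
            print record_type, fields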

  • Connecting to SQL Server 2005 via Web Service

    - by clear-cycle-corp
    Delphi 2010, dbExpress and a SQL Server 2005 DB. I am trying to make a connection to a SQL Server 2005 DB using Delphi 2010 and dbExpress. If I create a standard Delphi application and hard-code my connection, it works:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          Conn: TSQLConnection;
        begin
          Conn := TSQLConnection.Create(nil);
          Conn.ConnectionName := 'VPUCDS_VPN_SE01';
          Conn.LoadParamsOnConnect := True;
          Conn.LoginPrompt := True;
          try
            Conn.Connected := True;
            if Conn.Connected then
              ShowMessage('Connected!')
            else
              ShowMessage('NOT Connected!')
          finally
            Conn.Free;
          end;
        end;

    All the ini files and DLLs reside in the same folder as my executable and, yes, I have DBXMsSQL and MidasLib in the uses clause. Again, it works when it is not a web service. However, if I then move the code over to a web service CGI module, the connection does not work:

        function TTest.ConnectToDB: Boolean; stdcall;
        var
          Conn: TSQLConnection;
        begin
          Conn := TSQLConnection.Create(nil);
          Conn.ConnectionName := 'VPUCDS_VPN_SE01';
          Conn.LoadParamsOnConnect := True;
          Conn.LoginPrompt := True;
          try
            Conn.Connected := True;
            Result := Conn.Connected;
          finally
            Conn.Free;
          end;
        end;

    Thanks

    Read the article

  • Handling Incoming Data from Multiple Sockets in Python

    - by user859434
    Background: I have a current implementation that receives data from about 120 different socket connections in Python. In my current implementation, I handle each of these socket connections with a dedicated thread. Each of these threads parses the data and eventually stores it within a shared, locked dictionary. These sockets do NOT have uniform data rates; some sockets get more data than others. Question: is this the best way to handle incoming data in Python, or does Python have a better way of handling multiple sockets per thread? A single-threaded, select-based sketch follows this entry.

    Read the article
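
    One common alternative is a single thread multiplexing all of the sockets with select (or an event framework built on top of it, such as asyncore or Twisted). A minimal sketch, assuming the sockets are already connected and that parse() stands in for your own protocol parsing:

        import select

        def serve(sockets, shared_dict):
            # one thread watches every socket; select() returns only those with data ready
            while sockets:
                readable, _, _ = select.select(sockets, [], [])
                for sock in readable:
                    data = sock.recv(4096)
                    if not data:                    # peer closed the connection
                        sockets.remove(sock)
                        sock.close()
                        continue
                    key, value = parse(data)        # placeholder for your parsing routine
                    shared_dict[key] = value        # only this thread writes, so no lock is needed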

  • Testing a (big) collection retrieved from a db

    - by Bas
    I'm currently doing integration testing against a live database, and I have the following statement:

        var date = DateTime.Parse("01-01-2010 20:30:00");
        var result = datacontext.Repository<IObject>().Where(r => r.DateTime > date).First();
        Assert.IsFalse(result.Finished);

    I need to test that the results retrieved by the query, where the given date is less than the date of the object, all have Finished set to false. I do not know how many results I will get back, and currently I take the first object of the list and check whether that object has Finished set to false. I know testing only the first item of the list is not valid testing. As a solution I could iterate through the list and check all items for Finished, but putting logic in a test kind of goes against the concept of writing 'good' tests. So my question is: does anyone have a good solution for properly testing the results of this list?

    Read the article

  • I can't use dashes in XPath on iPhone using libxml2

    - by user249488
    I'm trying to parse HTML data using KissXML for iPhone. I've noticed that I can't have dashes in the argument to id(), otherwise it won't evaluate. For example, to get the element with id "foo" I can use id("foo"). However, if I try to get the element with id "foo-bar" using id("foo-bar"), the libxml2 XPath engine doesn't seem to return anything. It works in Firefox's XPath checker, though. Has anyone run into this issue and does anyone know why it's happening, or have a workaround (besides using the absolute XPath path)? An attribute-test workaround is sketched after this entry.

    Read the article
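
    A common workaround is to match the attribute value directly instead of going through id(), which depends on the parser's notion of which attributes are of type ID and can be finicky; sketched here against libxml2 via Python's lxml, but the XPath expression itself is the point.

        from lxml import html

        doc = html.fromstring(page_source)            # page_source: the HTML string you already have
        # equivalent in intent to id("foo-bar"), without relying on ID-attribute handling
        matches = doc.xpath('//*[@id="foo-bar"]')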

  • PHP Populating array with $variables

    - by Tom
    Hi, I'm trying to create a basic shopping cart and am having an issue with the product page allowing users to add more items to their cart than are in stock (I have code in place to prevent this on the view-cart page, just not on the view-product page). This is what I have so far:

        for ($i = 0; $i < $numItem; $i++) {
            extract($cartContent[$i]);
            $subTotal += $price * $cartQuantity;
            $cartLimiter[$itemNo => $cartQuantity];

    I'm using an array so the position number becomes the item number and the cart quantity becomes the assigned value; however, it doesn't seem to like it and fails on the bottom line of code:

        Parse error: syntax error, unexpected T_DOUBLE_ARROW, expecting ']'

    Thanks

    Read the article

  • XML Parsing in Groovy strips attribute new lines

    - by Bill James
    I'm writing code that retrieves XML from a web API and then parses it with Groovy. Unfortunately, it seems that both XmlParser and XmlSlurper strip newline characters from node attributes when .text() is called. How can I get at the attribute text including the newlines? Sample code:

        def xmltest = '''
        <snippet>
            <preSnippet att1="testatt1" code="This is line 1
        This is line 2
        This is line 3" >
                <lines count="10" />
            </preSnippet>
        </snippet>'''

        def parsed = new XmlParser().parseText( xmltest )
        println "Parsed"
        parsed.preSnippet.each { pre ->
            println pre.attribute('code');
        }

        def slurped = new XmlSlurper().parseText( xmltest )
        println "Slurped"
        slurped.children().each { preSnip ->
            println preSnip.@code.text()
        }

    the output of which is:

        Parsed
        This is line 1 This is line 2 This is line 3
        Slurped
        This is line 1 This is line 2 This is line 3

    A short demonstration of the underlying attribute handling follows this entry.

    Read the article
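
    This looks like standard XML attribute-value normalization: a conforming parser turns literal newlines inside attribute values into spaces, while character references such as &#10; survive. A quick way to see the same behaviour outside Groovy, sketched with Python's ElementTree:

        import xml.etree.ElementTree as ET

        literal = ET.fromstring('<p code="line 1\nline 2"/>')
        escaped = ET.fromstring('<p code="line 1&#10;line 2"/>')

        print repr(literal.get('code'))    # 'line 1 line 2'  -- literal newline normalized to a space
        print repr(escaped.get('code'))    # 'line 1\nline 2' -- character reference preserved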

  • Download and write .tar.gz files without corruption.

    - by arbales
    I've tried numerous ways of downloading files, specifically .zip and .tar.gz, with Ruby and writing them to disk. I've found that the file appears to be the same as the reference (in size), but the archives refuse to extract. What I'm attempting now is:

        def download_request(url, filePath:path, progressIndicator:progressBar)
          file = File.open(path, "w+")
          begin
            Net::HTTP.get_response URI.parse(url) do |response|
              if response['Location'] != nil
                puts 'Direct to: ' + response['Location']
                return download_request(response['Location'], filePath:path, progressIndicator:progressBar)
              end
              # some stuff
              response.read_body do |segment|
                file.write(segment)
                # some progress stuff.
              end
            end
          ensure
            file.close
          end
        end

        download_request("http://github.com/jashkenas/coffee-script/tarball/master", filePath:"tarball.tar.gz", progressIndicator:nil)

    (For comparison, a minimal streaming download is sketched after this entry.) Thanks!

    Read the article
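
    For comparison, a minimal streaming download in Python. Two details worth double-checking when archives download but refuse to extract are the output file's binary mode and redirect handling; both are covered below. This is a hedged comparison point, not a diagnosis of the code above.

        import urllib2, shutil

        def download(url, path, chunk_size=16 * 1024):
            response = urllib2.urlopen(url)   # urllib2 follows HTTP redirects on its own
            with open(path, 'wb') as out:     # binary mode, so no byte values get rewritten
                shutil.copyfileobj(response, out, chunk_size)

        download("http://github.com/jashkenas/coffee-script/tarball/master", "tarball.tar.gz")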

  • Good XMPP Java Libraries for server side?

    - by Taylor Gautier
    I was hoping to implement a simple XMPP server in Java. What I need is a library that can parse and understand XMPP requests from a client. I have looked at Smack (mentioned below) and JSO. Smack appears to be client-only, so while it might help with parsing packets it doesn't know how to respond to clients. Is JSO maintained? It looks very old. The only promising avenue is to pull apart Openfire, which is an entire commercial (OSS) XMPP server. I was just hoping for a few lines of code on top of Netty or Mina, so I could get started processing some messages off the wire.

    Read the article

  • Android MapView High Res tiles?

    - by Nemi
    I have created a simple map app that shows the local garage sales in my town. It was simple enough to create, and I parse the Google Maps (HTML) XML doc that the local paper uses for their Google Maps mashup on their website. I have noticed that the map tiles used by the MapView class are not as high-res as the tiles used in the real Android Google Maps app (I have a Droid). I have searched the docs and Google, but I cannot find any info on this. Actually, I can't even find anyone mentioning it, which surprises me and makes me think I am missing something simple. Is this a case of Google making the APIs available but at a lower quality, or am I missing some setting that enables higher-quality map tiles?

    Read the article

  • How can multiple trailing slashes be removed from a URL in Ruby?

    - by splintercell
    Hello, what I'm trying to achieve here: let's say we have two example URLs, url1 = "http://emy.dod.com/kaskaa/dkaiad/amaa//////////" and url2 = "http://www.example.com/". How can I extract the stripped-down URLs, i.e. url1 as "http://emy.dod.com/kaskaa/dkaiad/amaa" and url2 as "http://www.example.com"? URI.parse in Ruby sanitizes certain types of malformed URL but is ineffective in this case. If we use the regex /^(.*)\/$/, it removes a single trailing slash from url1 and is ineffective for url2. Is anybody aware of how to handle this type of URL parsing? The point is that I don't want my system to treat "http://www.example.com/" and "http://www.example.com" as two different URLs, and the same goes for "http://emy.dod.com/kaskaa/dkaiad/amaa////" and "http://emy.dod.com/kaskaa/dkaiad/amaa/". A one-line pattern is sketched after this entry. Cheers, -dg

    Read the article
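
    The trick is to anchor on one or more trailing slashes rather than exactly one. Shown in Python; Ruby's String#sub takes the same pattern (url.sub(%r{/+\z}, '')).

        import re

        def strip_trailing_slashes(url):
            # '/+$' matches one OR MORE slashes at the end of the string
            return re.sub(r'/+$', '', url)

        print strip_trailing_slashes("http://emy.dod.com/kaskaa/dkaiad/amaa//////////")
        # http://emy.dod.com/kaskaa/dkaiad/amaa
        print strip_trailing_slashes("http://www.example.com/")
        # http://www.example.com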

  • How to output floating point numbers with a custom output format in C++?

    - by Victor Liu
    The problem is that I want to output Mathematica-compatible floating-point numbers. The only difference from the standard iostream or printf output format is that the exponent marker e is replaced by *^: the standard C/C++ output format is 1.23e-4, while the Mathematica format is 1.23*^-4. Is there a way to manipulate streams to achieve this effect? My original idea was just to use a stringstream, dump the number to a string and then replace all the e's. I would also be okay if someone posted code to parse through the bits of the floating-point number and output it directly (i.e. a printf("%e") replacement).

    Read the article

  • string parsing help

    - by sprugman
    I've got a string like this:

        ####################
        Section One
        ####################
        Data A
        Data B

        ####################
        Section Two
        ####################
        Data C
        Data D

        etc.

    I want to parse it into something like:

        $arr(
            'Section One' => array('Data A', 'Data B'),
            'Section Two' => array('Data C', 'Data D')
        )

    At first I tried this:

        $sections = preg_split("/(\r?\n)(\r?\n)#/", $file_content);

    The problem is, the file isn't perfectly clean: sometimes there are different numbers of blank lines between the sections, or blank spaces between data rows. The section-head pattern itself seems to be relatively consistent:

        ####################
        Section Title
        ####################

    The number of #'s is probably consistent, but I don't want to count on it. The whitespace on the title line is pretty random. Once I have it split into sections, I think it'll be pretty straightforward, but any help writing a killer regex to get it there would be appreciated. (Or if there's a better approach than regex...) One approach is sketched after this entry.

    Read the article
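
    One sketch of the split-on-header approach, in Python; the same regex drops into PHP's preg_split with the PREG_SPLIT_DELIM_CAPTURE flag. The header pattern tolerates a variable number of #'s, stray spaces on the title line and extra blank lines.

        import re

        # a header is a line of #'s, a title line, and another line of #'s,
        # with any amount of surrounding whitespace
        header = re.compile(r'\s*#{4,}\s*\n\s*(.+?)\s*\n\s*#{4,}\s*\n')

        def parse_sections(text):
            parts = header.split(text)   # ['', 'Section One', 'Data A\nData B\n', 'Section Two', ...]
            sections = {}
            for title, body in zip(parts[1::2], parts[2::2]):
                sections[title] = [line.strip() for line in body.splitlines() if line.strip()]
            return sections

        # parse_sections(file_content) ==
        #   {'Section One': ['Data A', 'Data B'], 'Section Two': ['Data C', 'Data D']}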

  • How can I access variables outside of current scope in javascript?

    - by sekmet64
    I'm writing an application in JavaScript and cannot figure out how to access the variables declared in my function from inside this jQuery loop. Inside it I can access global variables, but I don't really want to create globals for these values. Basically I want to extract file names from an XML document held in the simulationFiles variable. I check whether the node attribute is equal to simName and extract the two strings inside the XML elements; that part I think is working. How can I extract those XML elements and append them to local variables?

        function CsvReader(simName) {
            this.initFileName = "somepath";
            this.eventsFileName = "somepath";
            $(simulationFiles).find('simulation').each(function() {
                if ($(this).attr("name") == simName) {
                    initFileName += $(this).find("init").text();
                    eventsFileName += $(this).find("events").text();
                }
            });
        }

    Read the article

  • Batch file search and replace using wildcards?

    - by user329358
    I have an HTML (text) file that I am using as a template or source file to create further HTML files. The filename is pg_0001.htm and it contains a line of code referencing pg_0001.jpg. I want to parse the pg_0001.htm source file, increment and replace the JPEG string (making it "pg_0002.jpg"), and then output the edited htm file to a new filename, pg_0002.htm. I then take each newly created file (pg_0002.htm, pg_0003.htm, etc.) as the source file and repeat the processing until I have reached my target goal (let's say 100 newly created htm files, each containing code to display the corresponding JPEG). It must be done this way (fileX.htm containing fileX.jpg) because other JavaScript uses these incremented filenames as function input. I used to know how to write incrementing batch files many years ago, but I'm old and very rusty now. Can anyone please help me do this? (A scripted version of the whole loop is sketched after this entry.) Many thanks in advance. Regards, Harry

    Read the article
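
    If a short script is an acceptable stand-in for a pure batch file, the whole generate-and-substitute loop fits in a few lines of Python. Each page is generated straight from the pg_0001.htm template rather than chaining file-to-file, which amounts to the same thing; the target of 100 pages is the figure assumed in the question.

        NUM_PAGES = 100

        template = open('pg_0001.htm').read()

        for page in range(2, NUM_PAGES + 1):
            name = 'pg_%04d' % page                            # pg_0002, pg_0003, ...
            with open(name + '.htm', 'w') as out:
                out.write(template.replace('pg_0001.jpg', name + '.jpg'))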

  • How do I style a JSON feed in my view?

    - by stephenmurdoch
    My Rails app gets a JSON feed from Mixcloud and sticks the results into my index page. At the moment, when I do this the entire contents of the feed are displayed unformatted in one big blob of scary-looking text (without the curly JSON brackets). I only want to display specific values from the feed in the view; let's say for simplicity that I just want to display all values with a key of "url". In case I'm doing something wrong, here's my code:

        # podcast controller
        def index
          # I'm using a class method to get the feed
          @feed = Podcast.feed
        end

        # podcast model
        def self.feed
          feed = JSON.parse(open("http://api.mixcloud.com/alivefrommaryhill/feed").read)
        end

        # index.html.haml
        .feed
          = @feed

    I can't figure out how to style the results and display only certain items from the feed. Is my approach wrong?

    Read the article

  • Configuring PHP in IIS with Tomcat

    - by Silent Walker
    I have my Java site running under IIS 7, and I need to install a WordPress blog in it. I've installed and configured PHP in IIS and tested the PHP handler by creating a separate site; everything works fine and phpinfo() gives the desired output. However, I'm having a problem running PHP files inside my Java web application. I've put my test PHP file inside a folder called blog. When I access this folder in the browser as /mysite/blog, I get a 404 page from my Java application. When I try to invoke the PHP page directly, http://mysite/blog/index.php, I get an unprocessed PHP page. I'm using isapi_handler for the redirects. How do I tell isapi_handler to ignore the /blog folder? In my IIS handler mappings, *.php is mapped to FastCGI. I'm not sure how to approach this problem and any help would be much appreciated. Thanks in advance.

    Read the article
