Search Results

Search found 57458 results on 2299 pages for 'http response codes'.

  • Spring MVC 3.0: Avoiding explicit JAXBElement<> wrapper in method arg

    - by Keith Myers
    I have the following method and want to avoid having to explicitly show the JAXBElement< syntax. Is there some sort of annotation that would allow the method to appear to accept raw MessageResponse objects but in actuality work the same as shown below? I'm not sure how clear that was so I'll say this: I'm looking for some syntactic sugar :) @ServiceActivator public void handleMessageResponse(JAXBElement<MessageResponse> jaxbResponse) { MessageResponse response = jaxbResponse.getValue(); MessageStatus status = messageStatusDao.getByStoreIdAndMessageId(response.getStoreId(), response.getMessageId()); status.setStatusTimestamp(response.getDate()); status.setStatus("Complete"); }
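
    One commonly suggested way to get that syntactic sugar is to unwrap the payload in a separate transformer before it reaches the activator, so the activator method can declare the raw MessageResponse. A minimal sketch, assuming Spring Integration's annotation support is configured; the class name and comments are placeholders, and the channel wiring stays whatever it is in the existing configuration:

        import javax.xml.bind.JAXBElement;
        import org.springframework.integration.annotation.ServiceActivator;
        import org.springframework.integration.annotation.Transformer;

        public class MessageResponseEndpoint {

            // Registered upstream of the activator; unwraps the JAXB wrapper so
            // downstream components only ever see the domain object.
            @Transformer
            public MessageResponse unwrap(JAXBElement<MessageResponse> jaxbResponse) {
                return jaxbResponse.getValue();
            }

            // The activator can now accept the raw type directly.
            @ServiceActivator
            public void handleMessageResponse(MessageResponse response) {
                // ... same body as the original method, minus the getValue() call ...
            }
        }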

    Read the article

  • How to get the entire document in Scrapy using hxs.select

    - by Chris Smith
    I've been at this for 12 hours and I'm hoping someone can give me a leg up. Here is my code; all I want is to get the anchor text and URL of every link on a page as it crawls along. from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.utils.url import urljoin_rfc from scrapy.utils.response import get_base_url from urlparse import urljoin #from scrapy.item import Item from tutorial.items import DmozItem class HopitaloneSpider(CrawlSpider): name = 'dmoz' allowed_domains = ['domain.co.uk'] start_urls = [ 'http://www.domain.co.uk' ] rules = ( #Rule(SgmlLinkExtractor(allow='>example\.org', )), Rule(SgmlLinkExtractor(allow=('\w+$', )), callback='parse_item', follow=True), ) user_agent = 'Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))' def parse_item(self, response): #self.log('Hi, this is an item page! %s' % response.url) hxs = HtmlXPathSelector(response) #print response.url sites = hxs.select('//html') #item = DmozItem() items = [] for site in sites: item = DmozItem() item['title'] = site.select('a/text()').extract() item['link'] = site.select('a/@href').extract() items.append(item) return items What am I doing wrong? My eyes hurt now.

    Read the article

  • Using a Filter to serve a specific page?

    - by user246114
    Hi, I am using a class which implements Filter for my jsp stuff. It looks like this: public class MyFilter implements Filter { public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { request.getRequestDispatcher("mypage.jsp").forward(request, response); } } So the target, "mypage.jsp", is just sitting in my top-level directory. The filter works fine if I'm entering urls like: http://www.mysite.com/foo http://www.mysite.com/boo but if I enter a trailing slash, I'll get a 404: http://www.mysite.com/foo/ http://www.mysite.com/boo/ HTTP ERROR: 404 /foo/mypage.jsp RequestURI=/foo/mypage.jsp it seems if I enter the trailing slash, then the filter thinks I want it to look for mypage.jsp in subfolder foo or boo, but I really always want it to just find it at: http://www.mysite.com/mypage.jsp how can I do that? Thank you
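
    A detail worth noting: a path handed to request.getRequestDispatcher() without a leading slash is resolved relative to the current request path, which matches the /foo/mypage.jsp lookup in the 404 above. A minimal sketch of the usual fix, using a context-relative path so the forward always lands on the same JSP:

        import java.io.IOException;
        import javax.servlet.*;

        public class MyFilter implements Filter {

            public void init(FilterConfig config) throws ServletException { }

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                // The leading slash makes the path relative to the web application root,
                // so /foo and /foo/ both forward to the same /mypage.jsp.
                request.getRequestDispatcher("/mypage.jsp").forward(request, response);
            }

            public void destroy() { }
        }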

    Read the article

  • Events are not displayed on my fullcalendar

    - by ChangJiu
    Hi BalusC! I have used your method from above in my servlet [CalendarMap]: public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { Map map = new HashMap(); map.put("id", 115); map.put("title", "changjiu"); map.put("start", new SimpleDateFormat("yyyy-MM-15").format(new Date())); map.put("url", "http://yahoo.com/"); // Convert to JSON string. String json = new Gson().toJson(map); // Write JSON string. response.setContentType("application/json"); response.setCharacterEncoding("UTF-8"); response.getWriter().write(json); } I want to display it in my fullcalendar as follows: $(document).ready(function() { $('#calendar').fullCalendar({ eventSources: [ "CalendarMap" ] }); }); but it doesn't work! Can you help me? Thank you!
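
    fullCalendar's JSON event sources expect an array of event objects rather than a single object, so one thing worth checking is whether serializing a list changes the behaviour. A sketch of the servlet body written that way (same imports as above plus java.util.List and java.util.ArrayList; the variable names are placeholders):

        // Inside doGet(): build a list of event maps and serialize the whole list.
        List<Map<String, Object>> events = new ArrayList<Map<String, Object>>();

        Map<String, Object> event = new HashMap<String, Object>();
        event.put("id", 115);
        event.put("title", "changjiu");
        event.put("start", new SimpleDateFormat("yyyy-MM-15").format(new Date()));
        event.put("url", "http://yahoo.com/");
        events.add(event);

        String json = new Gson().toJson(events);   // produces [ { ... } ], a JSON array

        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        response.getWriter().write(json);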

    Read the article

  • Writing an image to ResponseBase.OutputStream does not work anymore with ASP.NET MVC2 RC2

    - by labilbe
    The following code worked nicely on ASP.NET MVC 1: public class ImageResult : ActionResult { public Image Image { get; set; } public override void ExecuteResult(ControllerContext context) { if (Image == null) { return; } HttpResponseBase response = context.HttpContext.Response; response.ContentType = "image/png"; Image.Save(response.OutputStream, ImageFormat.Png); } } I spent some time searching for answers but didn't find any. The error thrown is "OutputStream is not available when a custom TextWriter is used."

    Read the article

  • Client Web Browser Behavior When Handling 301 Redirect

    - by Jon Swanson
    The RFC seems to suggest that the client should permanently cache the response: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html 10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request. I'm having a hard time finding concrete browser documentation for any major browser that states how they handle these. I've started digging through the source code of Firefox, but quickly got lost. For which (if any) browsers is the following scenario true, and is there definitive documentation for either Firefox or IE that states as much? First Time Around: 1.1: User enters a link to Site A, or clicks on a link directed at Site A. 1.2: Browser interprets the link to Site A; first time, no cache. Sends GET to Site A. 1.3: Site A responds with a 301 redirect to Site B. 1.4: Browser sends GET to Site B. Any Subsequent Times Around: 2.1: User clicks on a link directed at Site A. 2.2: Browser sees that, due to a past 301 redirect, Site A should now be Site B. 2.3: Without initiating any request whatsoever at Site A, the browser initiates a GET at Site B.
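
    For checking what a given browser actually does, it helps to first see the raw 301 and its caching headers without any automatic redirect handling; a small probe using java.net.HttpURLConnection (the URL is a placeholder for Site A):

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RedirectProbe {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://site-a.example/");           // placeholder for Site A
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setInstanceFollowRedirects(false);                // show the 301 itself
                conn.setRequestMethod("GET");

                System.out.println("Status:        " + conn.getResponseCode());
                System.out.println("Location:      " + conn.getHeaderField("Location"));
                // Cache-Control / Expires are what permit or limit reuse of the redirect.
                System.out.println("Cache-Control: " + conn.getHeaderField("Cache-Control"));
                System.out.println("Expires:       " + conn.getHeaderField("Expires"));
                conn.disconnect();
            }
        }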

    Read the article

  • Getting error 400 / 404 - HttpUtility.UrlEncode not encoding full string?

    - by Justin808
    Why do the following URLs give me the IIS errors below? A) http://192.168.1.96/cms/View.aspx/Show/Small+test' A2) http://192.168.1.96/cms/View.aspx/Show/Small%20test' <-- this works, but is not the result from HttpUtility.UrlEncode() B) http://192.168.1.96/cms/View.aspx/Show/'%26$%23funky**!!~''+page Error for A: HTTP Error 404.11 - Not Found The request filtering module is configured to deny a request that contains a double escape sequence. Error for B: HTTP Error 400.0 - Bad Request ASP.NET detected invalid characters in the URL. The last part of the URL after /Show/ is the result of sending the text through HttpUtility.UrlEncode(), so according to Microsoft it is URL-encoded correctly. If I use HttpUtility.UrlPathEncode() rather than HttpUtility.UrlEncode() I get the A2 results. But B ends up looking like: http://192.168.1.96/TVCMS-CVJZ/cms/View.aspx/Show/'&$#funky**!!~''%20page which is still wrong. Does Microsoft know how to URL-encode at all? Is there a function someone has written up to do it the correct way?
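
    The underlying distinction is between encoding for a query string (where a space may become '+') and encoding for a path segment (where it must become %20); a '+' or extra '%' sequence in the path is typically what the double-escape filter objects to. The same distinction shown in Java, purely as an illustration of the two encodings (host and path copied from the example above):

        import java.net.URI;
        import java.net.URLEncoder;

        public class EncodingDemo {
            public static void main(String[] args) throws Exception {
                String value = "Small test";

                // Query-string style: the space becomes '+', which is only meant for use after '?'.
                System.out.println(URLEncoder.encode(value, "UTF-8"));      // Small+test

                // Path style: the multi-argument URI constructor percent-encodes the
                // path, so the space becomes %20.
                URI uri = new URI("http", "192.168.1.96", "/cms/View.aspx/Show/" + value, null);
                System.out.println(uri.toASCIIString());
                // http://192.168.1.96/cms/View.aspx/Show/Small%20test
            }
        }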

    Read the article

  • How to get elements from an object in C#?

    - by Drew
    I am using the AutoCompleteBox in WPF and populate the suggestions with a List whose items consist of four fields. When the user selects an item and I reach my event handler, I can see that MyAutoCompleteBox.SelectedItem is an object that has my four values; if I hover over it in the debugger I can see the four values listed. However, I don't know how to access these values in the code. I tried List<Codes> selected = MyAutoCompleteBox.SelectedItem as List<Codes>; where Codes is my List. selected comes back null every time. Is there a way to get to these values? Thanks!

    Read the article

  • Why doesn't this code work correctly?

    - by MisterSir
    I'm working on a website that displays galleries, using jCarousel. But no matter what I try, I can't get it to work, and I need to finish this by today. I have a very urgent schedule. My code basically takes image URLs from a database and sends them to AJAX, which passes it to jCarousel which makes the gallery. But there are a few problems: It doesn't display correctly! I can only get the last item pulled from the database, and it displays on the bottom-most row. After the item pulled from the database is displayed, the first time I click on "prev" there's no scroll effect, and the item just disappears! Only if I click on "next" 2-3 times there's a scroll effect and the item remains visible. My items are always displayed at the end of the carousel! This is urgent.. Please help me fix this. about.html: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us"> <head> <script type="text/javascript" src="jquery-1.4.4.min.js"></script> <script type="text/javascript" src="/lib/jquery.jcarousel.min.js"></script> <link rel="stylesheet" type="text/css" href="/skins/tango/skin.css" /> <!--<style type="text/css"> #wrapper { width: 700px; margin-left: auto; margin-right: auto; } #carousel { margin-top: 120px; padding-left: 120px; } #side { padding-left: 550px; position: absolute; padding-top: 120px; } #hidden { color: #FFFFFF; } </style>--> <script type="text/javascript"> jQuery.easing['BounceEaseOut'] = function(p, t, b, c, d) { if ((t/=d) < (1/2.75)) { return c*(7.5625*t*t) + b; } else if (t < (2/2.75)) { return c*(7.5625*(t-=(1.5/2.75))*t + .75) + b; } else if (t < (2.5/2.75)) { return c*(7.5625*(t-=(2.25/2.75))*t + .9375) + b; } else { return c*(7.5625*(t-=(2.625/2.75))*t + .984375) + b; } }; function mycarousel_initCallback(carousel) { jQuery('#mycarousel-next').bind('click', function() { carousel.next(); return false; }); jQuery('#mycarousel-prev').bind('click', function() { carousel.prev(); return false; }); }; jQuery(document).ready(function() { jQuery('#mycarousel').jcarousel({ easing: 'BounceEaseOut', wrap: "first", initCallback: mycarousel_initCallback, animation: 1000, scroll: 3, visible: 3, buttonNextHTML: null, buttonPrevHTML: null }); jQuery('#mycarousel2').jcarousel({ easing: 'BounceEaseOut', animation: 1000, wrap: "first", initCallback: mycarousel_initCallback, scroll: 3, visible: 3, buttonNextHTML: null, buttonPrevHTML: null }); jQuery('#mycarousel3').jcarousel({ easing: 'BounceEaseOut', animation: 1000, scroll: 3, wrap: "first", initCallback: mycarousel_initCallback, visible: 3, buttonNextHTML: null, buttonPrevHTML: null }); }); var prevButton = null; function getObject(b, el) { var currbutton = b; var http; var url = "about.php"; var parameters = "d=carousel&cat=" + currbutton; try { http = new XMLHttpRequest(); } catch(e) { try { http = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { http = new ActiveXObject("Microsoft.XMLHTTP"); } } function getServer() { if (http.readyState == 4) { var i = 0; var liArr = http.responseText; var built = liArr.split(", "); var li = document.createElement("li"); var ul1 = document.getElementById("mycarousel"); var ul2 = document.getElementById("mycarousel2"); var ul3 = document.getElementById("mycarousel3"); if (el != prevButton) { prevButton = el; while (ul1.hasChildNodes() ) {ul1.removeChild(ul1.lastChild);} while (ul2.hasChildNodes() ) {ul2.removeChild(ul2.lastChild);} while (ul3.hasChildNodes() ) 
{ul3.removeChild(ul3.lastChild);} } else return 0; while (i < (built.length) / 3) { li.innerHTML = built[i]; ul1.appendChild(li); i++; } while (i < ((built.length) / 3)*2) { li.innerHTML = built[i]; ul2.appendChild(li); i++; } while (i < (built.length)) { li.innerHTML = built[i]; ul3.appendChild(li); i++; } } } http.open("POST", url, true); http.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); http.setRequestHeader("Content-length", parameters.length); http.setRequestHeader("Connection", "close"); http.onreadystatechange = getServer; http.send(parameters); } </script> </head> <body> <span id="hidden"> </span> <div id="wrapper"> <div id="side"> <form name="cats"> <input type="button" value="Hats" onclick="getObject('hats', this);"/><br /> <input type="button" value="Pants" onclick="getObject('pants', this);"/><br /> <input type="button" value="Shirts" onclick="getObject('shirts', this);"/><br /> </form> </div> <div id="carousel"> <ul id="mycarousel" class="jcarousel-skin-tango"> </ul> <ul id="mycarousel2" class="jcarousel-skin-tango"> </ul> <ul id="mycarousel3" class="jcarousel-skin-tango"> </ul> <input type="button" id="mycarousel-prev" value="prev" /> <input type="button" id="mycarousel-next" value="next" /> </div> </div> </body> </html> I commented the CSS because I thought it was giving me trouble, but honestly I have no idea what the hell's going on with jCarousel. about.php: <?php echo "<img width='75' height='75' src='http://static.flickr.com/66/199481236_dc98b5abb3_s.jpg' />, hi, hi, hi, hi, hi, hi, hi, hi"; ?> Also, even if there are no other items than what is displayed, I'm still able to scroll back, but not forward, assumingly because my item is always placed at the end of the carousel. I know it looks like a lot of code but it's really not! My formatting takes a lot of lines, the commented CSS takes a lot, and a lot of the code is HTML and jCarousel configuration, and there's also the BounceEasing effect which takes a few lines. There's not much actual code! So as I said, this is urgent and I need this fixed. But I can't get it to work. Please help me! Thanks for your time! EDIT: I changed the code a bit, but it still does not work. I really need help on this one!! EDIT: I added document.createElement("li"); to each while loop. Now all my items are displayed, but they are displayed vertically and not horizontally on each row. Other than that all other problems are the same. EDIT: Oh and also, in the row my image displays, only the image is there. Maybe jCarousel doesn't accept img and text, I don't know.

    Read the article

  • Am I using handlers in the wrong way?

    - by superexsl
    Hey, I've never used HTTP Handlers before, and I've got one working, but I'm not sure if I'm actually using it properly. I have generated a string which will be saved as a CSV file. When the user clicks a button, I want the download dialog box to open so that the user can save the file. What I have works, but I keep reading about modifying the web.config file and I haven't had to do that. My Handler: private string _data; private string _title = "temp"; public void AddData(string data) { _data = data; } public bool IsReusable { get { return false; } } public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/csv"; context.Response.AddHeader("content-disposition", "filename=" + _title + ".csv"); context.Response.Write(_data); context.Response.Flush(); context.Response.Close(); } And this is from the page that allows the user to download: (on button click) string dataToConvert = "MYCSVDATA...."; csvHandler handler = new csvHandler(); handler.AddData(dataToConvert); handler.ProcessRequest(this.Context); This works fine, but no examples I've seen ever instantiate the handler and always seem to modify the web.config. Am I doing something wrong? Thanks
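
    The part that looks off is constructing the handler and calling ProcessRequest by hand; the handler abstraction is designed to be registered (which is what the web.config entry is for) and invoked by the runtime, and when you already have the Response inside a page you can simply write the CSV to it directly. Purely for comparison, the container-invoked shape of the same idea sketched as a Java servlet (class name, mapping and CSV content are placeholders):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        // Mapped in web.xml (or via annotation) so the container invokes it directly,
        // e.g. from a button or link pointing at /export.csv.
        public class CsvExportServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String data = "col1,col2\r\nvalue1,value2\r\n";   // placeholder CSV content
                resp.setContentType("text/csv");
                resp.setHeader("Content-Disposition", "attachment; filename=\"temp.csv\"");
                resp.getWriter().write(data);
            }
        }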

    Read the article

  • PHP pathinfo gets fooled by a URL in the query string. Any workaround?

    - by Majid
    I am working on a small function to take in a url and return a relative path based on where it resides itself. If the url contains a path in the query string, pathinfo returns incorrect results. This is demonstrated by the code below: $p = 'http://localhost/demos/image_editor/dir_adjuster.php?u=http://localhost/demos/some/dir/afile.txt'; $my_path_info = pathinfo($p); echo $p . '<br/><pre>'; print_r($my_path_info); echo '</pre>'; That code outputs: http://localhost/demos/image_editor/dir_adjuster.php?u=http://localhost/demos/some/dir/afile.txt Array ( [dirname] => http://localhost/demos/image_editor/dir_adjuster.php?u=http://localhost/demos/some/dir [basename] => afile.txt [extension] => txt [filename] => afile ) Which obviously is wrong. Any workaround?
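
    Since the second URL lives entirely inside the query string, one workaround is to split the query off first (in PHP, for example, by taking parse_url()'s path component before calling pathinfo()) and only then do the path parsing. The same idea expressed in Java, purely for illustration:

        import java.net.URI;

        public class PathInfoDemo {
            public static void main(String[] args) throws Exception {
                String p = "http://localhost/demos/image_editor/dir_adjuster.php"
                         + "?u=http://localhost/demos/some/dir/afile.txt";

                URI uri = new URI(p);
                String path  = uri.getPath();    // /demos/image_editor/dir_adjuster.php
                String query = uri.getQuery();   // u=http://localhost/demos/some/dir/afile.txt

                String dirname  = path.substring(0, path.lastIndexOf('/'));
                String basename = path.substring(path.lastIndexOf('/') + 1);

                System.out.println(dirname);     // /demos/image_editor
                System.out.println(basename);    // dir_adjuster.php
                System.out.println(query);
            }
        }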

    Read the article

  • Accessing a PDF via an HTTPS URL

    - by Paul
    I send out a newsletter email containing URLs to a https website that then redirects to a pdf document. On first invocation of a URL the user is prompted with the typical https browser "security alert" popup, on selecting "Yes" the display of the PDF fails. The HTTP Header on the failed response is: HTTP/1.1 200 OK Server: ECS/HTTP-Server Date: Tue, 16 Mar 2010 15:57:26 GMT Content-type: application/pdf Content-language: en-US Set-cookie: JSESSIONID=0000r111cRz1Vc-PtCJg8Cdu4eR:-1; Path=/ Expires: Thu, 01 Dec 1994 16:00:00 GMT Cache-control: no-cache="set-cookie, set-cookie2" Connection: close Subsequent invocations of the URL successfully opens the PDF (at this point we have the session id cookie set by the initial failed request). The HTTP Header on the successful response is: HTTP/1.1 200 OK Server: ECS/HTTP-Server Date: Tue, 16 Mar 2010 16:53:03 GMT Content-type: application/pdf Content-language: en-US Connection: close The email client is Lotus Notes 6.5 which launches an IE6 browser Any ideas?
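
    The failing response is the one carrying Cache-control: no-cache and an already-expired Expires header; older Internet Explorer versions are known to refuse to hand a PDF to the Acrobat plugin over SSL when the response is marked non-cacheable, which would fit these symptoms. A hedged sketch of the commonly suggested workaround, shown at the servlet level on the assumption that the headers can be controlled there (the filename is a placeholder):

        // Inside the servlet/controller that streams the PDF over HTTPS
        // ('response' is the HttpServletResponse):
        response.setContentType("application/pdf");
        response.setHeader("Content-Disposition", "inline; filename=\"newsletter.pdf\"");

        // Avoid no-cache / Pragma headers for PDFs served to IE over SSL; "private"
        // keeps shared caches out while still letting the browser write the temporary
        // file the PDF plugin needs.
        response.setHeader("Cache-Control", "private, max-age=60");
        // Do not send Pragma: no-cache or a past Expires date on this response.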

    Read the article

  • P/Invoke a Function Passed a StringBuilder

    - by andrew
    In a C# file I have a class Archiver { [DllImport("Archiver.dll")] public static extern void archive(string data, StringBuilder response); } string data is an input, and StringBuilder response is where the function writes something. The archive function prototype (written in C) looks like this: void archive(char * dataChr, char * outChr); it receives a string in dataChr and then does a strcpy(outChr,"some big text"); From C# I call it something like this: string message = "some text here"; StringBuilder response = new StringBuilder(10000); Archiver.archive(message,response); This works, but the problem, as you might see, is that I give a fixed size to the StringBuilder, but the archive function might give back a (way) larger text than the size I've given to my StringBuilder. Any way to fix this?

    Read the article

  • Get an XML attribute named xlink:href using XSL

    - by awe
    How can I get the value of an attribute called xlink:href of an xml node in xsl template? I have this xml node: <DCPType> <HTTP> <Get> <OnlineResource test="hello" xlink:href="http://localhost/wms/default.aspx" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:type="simple" /> </Get> </HTTP> </DCPType> When I try the following xsl, I get an error saying "Prefix 'xlink' is not defined." : <xsl:value-of select="DCPType/HTTP/Get/OnlineResource/@xlink:href" /> When I try this simple attribute, it works: <xsl:value-of select="DCPType/HTTP/Get/OnlineResource/@test" />
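
    The error means the xlink prefix is used in the XPath expression without being declared in the stylesheet; declaring xmlns:xlink="http://www.w3.org/1999/xlink" on the xsl:stylesheet element is the usual fix. The same prefix-binding requirement, illustrated here with Java's XPath API rather than XSL (the inline XML mirrors the snippet above):

        import java.io.StringReader;
        import java.util.Iterator;
        import javax.xml.XMLConstants;
        import javax.xml.namespace.NamespaceContext;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class XlinkDemo {
            public static void main(String[] args) throws Exception {
                String xml = "<DCPType><HTTP><Get>"
                    + "<OnlineResource test=\"hello\" xmlns:xlink=\"http://www.w3.org/1999/xlink\""
                    + " xlink:href=\"http://localhost/wms/default.aspx\" xlink:type=\"simple\"/>"
                    + "</Get></HTTP></DCPType>";

                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);                       // required for prefixed attributes
                Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(xml)));

                XPath xpath = XPathFactory.newInstance().newXPath();
                xpath.setNamespaceContext(new NamespaceContext() { // bind the xlink prefix
                    public String getNamespaceURI(String prefix) {
                        return "xlink".equals(prefix) ? "http://www.w3.org/1999/xlink"
                                                      : XMLConstants.NULL_NS_URI;
                    }
                    public String getPrefix(String uri) { return null; }
                    public Iterator<String> getPrefixes(String uri) { return null; }
                });

                String href = xpath.evaluate("/DCPType/HTTP/Get/OnlineResource/@xlink:href", doc);
                System.out.println(href);                          // http://localhost/wms/default.aspx
            }
        }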

    Read the article

  • What's a reasonable number of rows and tables to be able to join in MySQL?

    - by Philip Brocoum
    I have one table that maps locations to postal codes. For example, New York State has about 2000 postal codes. I have another table that maps mail to the postal codes it was sent to, but this table has about 5 million rows. I want to find all the mail that was sent to New York State, which seems simple enough, but the query is unbelievably slow. I haven't been able to even wait long enough for it to finish. Is the problem that there are 5 million rows? I can't help but think that 5 million shouldn't be such a large number for a computer these days... Oh, and everything is indexed. Is SQL just not designed to handle such large joins?

    Read the article

  • I'm using the correct content type and headers, so why is Firefox saving zip files without an extension?

    - by The_AlienCoder
    Users on my site have the option to download all the photos in an album as a zip file. The zip file is dynamically created and written to Response.OutputStream so that it is detected as a file download by the user's browser. Here are the header and content type I am outputting: context.Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip"); context.Response.ContentType = "application/x-zip-compressed"; Well, everything works fine with every browser except Firefox. Although Firefox correctly detects the download as a zip file, it saves the file without the .zip extension. I thought adding this header context.Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip"); is supposed to force Firefox to keep the extension. I believe I am following the correct protocol, so why is Firefox behaving this way and how do I fix this?
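
    Two things commonly reported to help in this situation are quoting the filename in Content-Disposition and using a plain zip (or octet-stream) media type instead of x-zip-compressed; a short sketch of just those headers, written with the Java servlet API for consistency with the other examples here (the C# calls are the direct equivalent):

        // Quoted filename plus a standard zip content type; both are frequently
        // suggested when Firefox drops the .zip extension on save.
        // ('response' is the HttpServletResponse.)
        response.setContentType("application/zip");
        response.setHeader("Content-Disposition", "attachment; filename=\"Photos.zip\"");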

    Read the article

  • Tricking the server to load files faster?

    - by Yongho
    If we have a website with multiple images and videos, I've read that it's best to serve them from other domains so that the browser can simultaneously download a bunch of files, rather than waiting one by one for each file to be downloaded. For example, if we have a website http://example.com/, we might consider serving: Videos from http://video.example.com/ Images from http://images.example.com/ etc. Question: can we achieve the simultaneous downloading by tricking the browser into believing that the files are hosted there, or do they actually need to be at that location? We can, for example, pretend to serve video from http://video.example.com/ when actually it's just a clever htaccess rewrite that ACTUALLY serves from http://example.com/video.php. In this case, the video is being served from the main domain but because we refer it as http://video.example.com/, it may think that it's another domain and thus load files simultaneously, rather than one by one. Is this feasible?

    Read the article

  • Apache HttpClient 4.0. Weird behavior.

    - by Mikhail T
    Hello. I'm using Apache HttpClient 4.0 for my web crawler. The behavior I find strange is this: I'm trying to get a page via an HTTP GET request and getting a 404 error in response, but if I try to get that page using a browser it loads successfully. Details: 1. I upload a multipart form to the server this way: HttpPost httpPost = new HttpPost("http://[host here]/in.php"); MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE); entity.addPart("method", new StringBody("post")); entity.addPart("key", new StringBody("223fwe0923fjf23")); FileBody fileBody = new FileBody(new File("photo.jpg"), "image/jpeg"); entity.addPart("file", fileBody); httpPost.setEntity(entity); HttpResponse response = httpClient.execute(httpPost); HttpEntity result = response.getEntity(); String responseString = ""; if (result != null) { InputStream inputStream = result.getContent(); byte[] buffer = new byte[1024]; while(inputStream.read(buffer) > 0) responseString += new String(buffer); result.consumeContent(); } The upload ends successfully. 2. I then get some results from the web server: HttpGet httpGet = new HttpGet("http://[host here]/res.php?key="+myKey+"&action=get&id="+id); HttpResponse response = httpClient.execute(httpGet); HttpEntity entity = response.getEntity(); I'm getting a ClientProtocolException while the execute method runs. I was debugging this situation with log4j. The server answers "404 Not Found", but my browser loads that page with no problem. Can anybody help me? Thank you.
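
    One difference between the browser and the code above is the request headers, and some servers answer 404 or 403 to clients that do not send a browser-like User-Agent or Referer; a quick way to rule that out with HttpClient 4.0 (the header values below are only examples):

        HttpGet httpGet = new HttpGet("http://[host here]/res.php?key=" + myKey + "&action=get&id=" + id);
        // Mimic a browser request to rule out User-Agent / Referer based filtering.
        httpGet.setHeader("User-Agent",
                "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0");
        httpGet.setHeader("Referer", "http://[host here]/");

        HttpResponse response = httpClient.execute(httpGet);
        System.out.println(response.getStatusLine());   // inspect the status before reading the entity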

    Read the article

  • JSON: How do I alert the name of the first node in level 1 in this code snippet?

    - by user143805
    var response = "{\"tree\":[{\"level1\":[{\"node\":{\"id\": 1,\"name\": \"paradox\",\"parent\": 0}}]},{\"level2\":[{\"node\":{\"id\": 2,\"name\": \"lucent\",\"parent\": 1}},{\"node\":{\"id\": 3,\"name\": \"reiso\",\"parent\": 1}}]},{\"level3\":[{\"node\":{\"id\": 4,\"name\": \"pessi\",\"parent\": 3}},{\"node\":{\"id\": 5,\"name\": \"misho\",\"parent\": 2}}]},{\"level4\":[{\"node\":{\"id\": 6,\"name\": \"hema\",\"parent\": 5}},{\"node\":{\"id\": 7,\"name\": \"iiyo\",\"parent\": 4}}]}]}"; var data = eval("(" + response + ")"); This is a dummy json response I am currently testing. Now how do I get the value of "name" in the 1st node of "level1" from the "tree"? Thanks

    Read the article

  • libcurl - unable to download a file

    - by marmistrz
    I'm working on a program which will download lyrics from sites like AZLyrics. I'm using libcurl. It's my code lyricsDownloader.cpp #include "lyricsDownloader.h" #include <curl/curl.h> #include <cstring> #include <iostream> #define DEBUG 1 ///////////////////////////////////////////////////////////////////////////// size_t lyricsDownloader::write_data_to_var(char *ptr, size_t size, size_t nmemb, void *userdata) // this function is a static member function { ostringstream * stream = (ostringstream*) userdata; size_t count = size * nmemb; stream->write(ptr, count); return count; } string AZLyricsDownloader::toProviderCode() const { /*this creates an url*/ } CURLcode AZLyricsDownloader::download() { CURL * handle; CURLcode err; ostringstream buff; handle = curl_easy_init(); if (! handle) return static_cast<CURLcode>(-1); // set verbose if debug on curl_easy_setopt( handle, CURLOPT_VERBOSE, DEBUG ); curl_easy_setopt( handle, CURLOPT_URL, toProviderCode().c_str() ); // set the download url to the generated one curl_easy_setopt(handle, CURLOPT_WRITEDATA, &buff); curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, &AZLyricsDownloader::write_data_to_var); err = curl_easy_perform(handle); // The segfault should be somewhere here - after calling the function but before it ends cerr << "cleanup\n"; curl_easy_cleanup(handle); // copy the contents to text variable lyrics = buff.str(); return err; } main.cpp #include <QString> #include <QTextEdit> #include <iostream> #include "lyricsDownloader.h" int main(int argc, char *argv[]) { AZLyricsDownloader dl(argv[1], argv[2]); dl.perform(); QTextEdit qtexted(QString::fromStdString(dl.lyrics)); cout << qPrintable(qtexted.toPlainText()); return 0; } When running ./maelyrica Anthrax Madhouse I'm getting this logged from curl * About to connect() to azlyrics.com port 80 (#0) * Trying 174.142.163.250... * connected * Connected to azlyrics.com (174.142.163.250) port 80 (#0) > GET /lyrics/anthrax/madhouse.html HTTP/1.1 Host: azlyrics.com Accept: */* < HTTP/1.1 301 Moved Permanently < Server: nginx/1.0.12 < Date: Thu, 05 Jul 2012 16:59:21 GMT < Content-Type: text/html < Content-Length: 185 < Connection: keep-alive < Location: http://www.azlyrics.com/lyrics/anthrax/madhouse.html < Segmentation fault Strangely, the file is there. The same error is displayed when there's no such page (redirect to azlyrics.com mainpage) What am I doing wrong? Thanks in advance EDIT: I made the function for writing data static, but this changes nothing. Even wget seems to have problems $ wget http://www.azlyrics.com/lyrics/anthrax/madhouse.html --2012-07-06 10:36:05-- http://www.azlyrics.com/lyrics/anthrax/madhouse.html Resolving www.azlyrics.com... 174.142.163.250 Connecting to www.azlyrics.com|174.142.163.250|:80... connected. HTTP request sent, awaiting response... No data received. Retrying. Why does opening the page in a browser work and wget/curl not? EDIT2: After adding this: curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1); The log is: * About to connect() to azlyrics.com port 80 (#0) * Trying 174.142.163.250... 
* connected * Connected to azlyrics.com (174.142.163.250) port 80 (#0) > GET /lyrics/anthrax/madhouse.html HTTP/1.1 Host: azlyrics.com Accept: */* < HTTP/1.1 301 Moved Permanently < Server: nginx/1.0.12 < Date: Fri, 06 Jul 2012 09:09:47 GMT < Content-Type: text/html < Content-Length: 185 < Connection: keep-alive < Location: http://www.azlyrics.com/lyrics/anthrax/madhouse.html < * Ignoring the response-body * Connection #0 to host azlyrics.com left intact * Issue another request to this URL: 'http://www.azlyrics.com/lyrics/anthrax/madhouse.html' * About to connect() to www.azlyrics.com port 80 (#1) * Trying 174.142.163.250... * connected * Connected to www.azlyrics.com (174.142.163.250) port 80 (#1) > GET /lyrics/anthrax/madhouse.html HTTP/1.1 Host: www.azlyrics.com Accept: */* < HTTP/1.1 200 OK < Server: nginx/1.0.12 < Date: Fri, 06 Jul 2012 09:09:47 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < Segmentation fault

    Read the article

  • [grails] setting cookies when render type is "contentType: text/json"

    - by Robin Jamieson
    Is it possible to set cookies on response when the return render type is set as json? I can set cookies on the response object when returning with a standard render type and later on, I'm able to get it back on the subsequent request. However, if I were to set the cookies while rendering the return values as json, I can't seem to get back the cookie on the next request object. What's happening here? These two actions work as expected with 'basicForm' performing a regular form post to the action, 'withRegularSubmit', when the user clicks submit. // first action set the cookie and second action yields the originally set cookie def regularAction = { // using cookie plugin response.setCookie("username-regular", "regularCookieUser123",604800); return render(view: "basicForm"); } // called by form post def withRegularSubmit = { def myCookie = request.getCookie("username-regular"); // returns the value 'regularCookieUser123' return render(view: "resultView"); } When I switch to setting the cookie just before returning from the response with json, I don't get the cookie back with the post. The request starts by getting an html document that contains a form and when doc load event is fired, the following request is invoked via javascript with jQuery like this: var someUrl = "http://localhost/jsonAction"; $.get(someUrl, function(jsonData) { // do some work with javascript} The controller work: // this action is called initially and returns an html doc with a form. def loadJsonForm = { return render(view: "jsonForm"); } // called via javascript when the document load event is fired def jsonAction = { response.setCookie("username-json", "jsonCookieUser456",604800); // using cookie plugin return render(contentType:'text/json') { 'pair'('myKey': "someValue") }; } // called by form post def withJsonSubmit = { def myCookie = request.getCookie("username-json"); // got null value, expecting: jsonCookieUser456 return render(view: "resultView"); } The data is returned to the server as a result of the user pressing the 'submit' button and not through a script. Prior to the submit of both 'withRegularSubmit' and 'withJsonSubmit', I see the cookies stored in the browser (Firefox) so I know they reached the client.
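
    Cookies travel as response headers, so they have to be added before the response is committed; if the JSON rendering path flushes the writer earlier than the view-based path, a later Set-Cookie is silently dropped. The ordering constraint sketched at the plain servlet level (not Grails-specific; the cookie names and JSON are taken from the example above, and 'response' is an HttpServletResponse):

        // Uses javax.servlet.http.Cookie.
        // Add the cookie while the response is still uncommitted ...
        Cookie cookie = new Cookie("username-json", "jsonCookieUser456");
        cookie.setMaxAge(604800);
        cookie.setPath("/");
        response.addCookie(cookie);

        // ... and only then write the JSON body.
        response.setContentType("application/json");
        response.getWriter().write("{\"pair\":{\"myKey\":\"someValue\"}}");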

    Read the article

  • Publish to Current user's wall using FBJS in FBML application

    - by Damodaran
    Hi, I need to publish a message to the current user's wall using FBJS in an FBML application. When I use window.fbAsyncInit = function() { FB.init({appId: 'MY_APP_ID', status: true, cookie: true, xfbml: false}); }; I get the errors "FB is not defined" and "window is not defined". For publishing I am using this code: function graphStreamPublish(){ var body = document.getElementById("txtTextToPublish").value; FB.api('/me/feed', 'post', { message: body }, function(response) { if (!response || response.error) { // alert('Error occured'); } else { // alert('Post ID: ' + response.id); } }); } (I cannot use alert in Facebook.) Thanks in advance.

    Read the article

  • Authenticate using OpenID without logging in through the provider

    - by thabet084
    Dear all, I created a web application to connect to a MySpace offsite app, and to authenticate I used the following code: var openid = new OpenIdRelyingParty(); IAuthenticationRequest request = openid.CreateRequest("http://www.myspace.com/thabet084"); request.AddExtension(new OAuthRequest("OAuthConsumerKey")); request.RedirectToProvider(); var response = openid.GetResponse(); OAuthResponse oauthExtension = new OAuthResponse(); if (response != null) { switch (response.Status) { case AuthenticationStatus.Authenticated: oauthExtension = response.GetExtension<OAuthResponse>(); var user_authorized_request_token = oauthExtension.RequestToken; break; } } OffsiteContext context = new OffsiteContext("ConsumerKey", "ConsumerSecret"); var accessToken = (AccessToken)context.GetAccessToken(oauthExtension.RequestToken, "", ""); and I used the following references: DotNetOpenAuth.dll and MySpaceID.SDK.dll. My problems are: the response is always null, and I don't want the user to log in through the MySpace provider, so I need to remove RedirectToProvider(). In brief, my application should send a status update from my website to a MySpace account when the user clicks a button. All ideas are welcome. BR, Mohammed Thabet Zaky

    Read the article

  • Is there a way to implement Caliburn-like co-routines in VB.NET since there's no yield keyword

    - by Miroslav Popovic
    Note that I'm aware of other yield in vb.net questions here on SO. I'm playing around with Caliburn lately. Bunch of great stuff there, including co-routines implementation. Most of the work I'm doing is C# based, but now I'm also creating an architecture guideline for a VB.NET only shop, based on Rob's small MVVM framework. Everything looks very well except using co-routines from VB. Since VB 10 is used, we can try something like Bill McCarthy's suggestion: Public Function Lines(ByVal rdr as TextReader) As IEnumerable(Of String) Return New GenericIterator(Of String) (Function(ByRef nextItem As String) As Boolean nextItem = rdr.ReadLine Return nextItem IsNot Nothing End Function) End Function I'm just failing to comprehend how a little more complex co-routine method like the one below (taken from Rob's GameLibrary) could be written in VB: public IEnumerable<IResult> ExecuteSearch() { var search = new SearchGames { SearchText = SearchText }.AsResult(); yield return Show.Busy(); yield return search; var resultCount = search.Response.Count(); if (resultCount == 0) SearchResults = _noResults.WithTitle(SearchText); else if (resultCount == 1 && search.Response.First().Title == SearchText) { var getGame = new GetGame { Id = search.Response.First().Id }.AsResult(); yield return getGame; yield return Show.Screen<ExploreGameViewModel>() .Configured(x => x.WithGame(getGame.Response)); } else SearchResults = _results.With(search.Response); yield return Show.NotBusy(); } Any idea how to achieve that, or any thoughts on using Caliburn co-routines in VB?

    Read the article
