Search Results

Search found 20873 results on 835 pages for 'url fetch'.

Page 340/835

  • How to get Content types

    - by Gaby
    Hi, I'm developing a Windows application that talks to SharePoint via its built-in web services, and I want to get all content types available on a SharePoint site. I'm trying to use:

        Web.Webs WebsService = new Web.Webs();
        WebsService.Credentials = credentials;
        WebsService.Url = "url of the web service";
        XmlNode listOfContentTypes = WebsService.GetContentTypes();

    If the credentials have administrator privileges I can get the list of all available content types, but if they don't, a 401 exception is thrown (not enough permission). My question is: how can I get all content types available on a SharePoint site if I don't have administrator privileges?

    Read the article

  • xDebug can't find my files. Always looks in localhost

    - by sleeper
    I'm running Win7 64-bit, NetBeans 7.1.1 and WampServer 2.2 (which has xDebug). I've configured php.ini (xdebug.remote_enable=on, etc.). I create a directory (a virtual host called example.dev) and add a test file (c:/wamp/example/test-xdebug.php). I run debug in NetBeans and the following URL displays: http://localhost/example/test-xdebug.php?XDEBUG_SESSION_START=netbeans-xdebug. This fails; the browser coughs up the following error message: "Not Found. The requested URL /example/test-xdebug.php was not found on this server." If I add the correct path to the virtual host, xDebug runs flawlessly: http://example.dev/test-xdebug.php?XDEBUG_SESSION_START=netbeans-xdebug. I've tried every configuration I could think of. If this is a php.ini config issue, I sure as heck can't find it. If it's a NetBeans issue, there is no option/interface to modify it (that I can find). Please illuminate! Thanks, sleeper

    Read the article

  • SIMPLE reverse geocoding using Nominatim

    - by tony gil
    I am developing an online mapping application using OpenLayers + OpenStreetMaps. I need help implementing a simple reverse geocoding function in JavaScript (or PHP) that receives a latitude and longitude and returns an address. I would like to work with Nominatim, if possible. I do NOT want to use Google, Bing, CloudMade or other proprietary solutions. This link returns a reasonable response, and I used simple_html_dom.php to break down the result, but it is sort of an ugly solution:

        <?php
        include('simple_html_dom.php');
        $url = "http://nominatim.openstreetmap.org/reverse?format=xml&lat=-23.56320001&lon=-46.66140002&zoom=27&addressdetails=1";
        $html = file_get_html($url);
        foreach ($html->find('road') as $element) {
            echo $element;
        }
        ?>

    Any suggestions for a more elegant solution?

    Read the article

  • How to add Category in DotClear blog with HttpWebRequest or MetaWeblog API

    - by Pitming
    I'm trying to create/modify DotClear blogs. For most of the options I use the XML-RPC API (DotClear.MetaWeblog), but I didn't find any way to handle categories, so I started to look at the HTTP packets and tried to do "the same as the browser". Here is the method I use to do an HTTP POST:

        protected HttpStatusCode HttpPost(Uri url_, string data_, bool allowAutoRedirect_)
        {
            HttpWebRequest Request;
            HttpWebResponse Response = null;
            Stream ResponseStream = null;
            Request = (System.Net.HttpWebRequest)HttpWebRequest.Create(url_);
            Request.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729)";
            Request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
            Request.AllowAutoRedirect = allowAutoRedirect_;

            // Add the network credentials to the request.
            Request.Credentials = new NetworkCredential(Username, Password);
            string authInfo = Username + ":" + Password;
            authInfo = Convert.ToBase64String(Encoding.Default.GetBytes(authInfo));
            Request.Headers["Authorization"] = "Basic " + authInfo;
            Request.Method = "POST";
            Request.CookieContainer = Cookies;
            if (ConnectionCookie != null)
                Request.CookieContainer.Add(url_, ConnectionCookie);
            if (dcAdminCookie != null)
                Request.CookieContainer.Add(url_, dcAdminCookie);
            Request.PreAuthenticate = true;

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = data_;
            byte[] data = encoding.GetBytes(postData); // Encoding.UTF8.GetBytes(data_); //encoding.GetBytes(postData);
            Request.ContentLength = data.Length;
            Request.ContentType = "application/x-www-form-urlencoded";
            Stream newStream = Request.GetRequestStream();

            // Send the data.
            newStream.Write(data, 0, data.Length);
            newStream.Close();

            try
            {
                // Get the response from the server.
                Response = (HttpWebResponse)Request.GetResponse();
                if (!allowAutoRedirect_)
                {
                    foreach (Cookie c in Response.Cookies)
                    {
                        if (c.Name == "dcxd") ConnectionCookie = c;
                        if (c.Name == "dc_admin") dcAdminCookie = c;
                    }
                    Cookies.Add(Response.Cookies);
                }

                // Get the response stream.
                ResponseStream = Response.GetResponseStream();

                // Pipes the stream to a higher level stream reader with the required encoding format.
                StreamReader readStream = new StreamReader(ResponseStream, Encoding.UTF8);
                string result = readStream.ReadToEnd();
                if (Request.RequestUri == Response.ResponseUri)
                {
                    _log.InfoFormat("{0} ==> {1}({2})", Request.RequestUri, Response.StatusCode, Response.StatusDescription);
                }
                else
                {
                    _log.WarnFormat("RequestUri:{0}\r\nResponseUri:{1}\r\nstatus code:{2} Status descr:{3}", Request.RequestUri, Response.ResponseUri, Response.StatusCode, Response.StatusDescription);
                }
            }
            catch (WebException wex)
            {
                Response = wex.Response as HttpWebResponse;
                if (Response != null)
                {
                    _log.ErrorFormat("{0} ==> {1}({2})", Request.RequestUri, Response.StatusCode, Response.StatusDescription);
                }
                Request.Abort();
            }
            finally
            {
                if (Response != null)
                {
                    // Releases the resources of the response.
                    Response.Close();
                }
            }

            if (Response != null)
                return Response.StatusCode;
            return HttpStatusCode.Ambiguous;
        }

    So the first thing to do is to authenticate as admin. Here is the code:

        protected bool HttpAuthenticate()
        {
            Uri u = new Uri(this.Url);
            Uri url = new Uri(string.Format("{0}/admin/auth.php", u.GetLeftPart(UriPartial.Authority)));
            string data = string.Format("user_id={0}&user_pwd={1}&user_remember=1", Username, Password);
            var ret = HttpPost(url, data, false);
            return (ret == HttpStatusCode.OK || ret == HttpStatusCode.Found);
        }

    3. Now that I'm authenticated, I need to get the xd_check info (which I can find on the page, so basically it's a GET on /admin/category.php plus Regex("dotclear[.]nonce = '(.*)'")).

    4. So I'm authenticated and have the xd_check info. The last thing to do seems to be to post the new category, but of course it does not work at all. Here is the code:

        string postData = string.Format("cat_title={0}&new_cat_parent={1}&xd_check={2}", category_, 0, xdCheck);
        HttpPost(url, postData, true);

    If anyone can help me and explain where it is wrong? Thanks in advance.
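    One thing that might be worth checking (a guess, not a confirmed fix): the form fields are concatenated into the POST body without URL-encoding, so a category title containing spaces, '&' or accented characters would corrupt the application/x-www-form-urlencoded payload. A minimal sketch of encoding the fields first, using HttpUtility from System.Web (the field names cat_title, new_cat_parent and xd_check are taken from the question):

        // Sketch: URL-encode each form field before building the POST body.
        // Requires a reference to System.Web (using System.Web; using System.Text;).
        string postData = string.Format(
            "cat_title={0}&new_cat_parent={1}&xd_check={2}",
            HttpUtility.UrlEncode(category_, Encoding.UTF8),
            0,
            HttpUtility.UrlEncode(xdCheck));
        HttpPost(url, postData, true);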

    Read the article

  • Tinymce extended_valid_elements for BBcodes?

    - by Emily
    I use TinyMCE with the BBCode plugin, so the tags used in the editor are [B] [U] [I] [quote] [color]. If you are familiar with TinyMCE, there is a great option to filter all unwanted tags when pasting into the editor. Unfortunately I think this is not working for BBCode mode. What I want is to remove any <TAGS> or other unwanted BBCodes such as [url] and [img]. So my whitelist is: [B] [U] [I] [quote] [color=XXXXXX]. I'm already filtering on the server side, but I want this to be implemented on the client side too, so for example if someone pastes a mix of images, URLs and formatted text copied from a web page, every image will be stripped and every URL will be "unclickable" directly after pasting. Note: this is one of my failed attempts:

        extended_valid_elements : 'b,i,u,quote,color'

    Thanks

    Read the article

  • how to run fastcgi

    - by joels
    I have FastCGI installed and running. I downloaded a developer kit from fastcgi.com, which had some examples in it. One of the example files echoes some stuff; it required a .libs and a .deps folder. I put those folders, along with an echo.fcgi file, into webroot/cgi-bin. If I go to the echo.fcgi URL, it works great. I created a simple C file that prints hello world, and I compile it using gcc -Wall -o main -lfcgi main.c. What do I do with it now? Does it require something like a Perl script or PHP script to be executed, or should I just be able to put it in the webroot/cgi-bin folder and go to its URL?

    Read the article

  • Google Hybrid OpenID+OAuth with dotnetopenauth

    - by Max Favilli
    I have spent probably more than 10 hours in the last two days trying to understand how to implement user login with Google Hybrid OpenID+OAuth (Federated Login). To trigger the authorization request I use:

        InMemoryOAuthTokenManager tm = new InMemoryOAuthTokenManager(
            ConfigurationManager.AppSettings["googleConsumerKey"],
            ConfigurationManager.AppSettings["googleConsumerSecret"]);

        using (OpenIdRelyingParty openid = new OpenIdRelyingParty())
        {
            Realm realm = HttpContext.Current.Request.Url.Scheme + Uri.SchemeDelimiter +
                ConfigurationManager.AppSettings["googleConsumerKey"] + "/";

            IAuthenticationRequest request = openid.CreateRequest(
                identifier,
                Realm.AutoDetect,
                new Uri(HttpContext.Current.Request.Url.Scheme + "://" +
                        HttpContext.Current.Request.Url.Authority + "/OAuth/google"));

            var authorizationRequest = new AuthorizationRequest
            {
                Consumer = ConfigurationManager.AppSettings["googleConsumerKey"],
                Scope = "https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/plus.me",
            };

            request.AddExtension(authorizationRequest);
            request.AddExtension(new ClaimsRequest
            {
                Email = DemandLevel.Request,
                Gender = DemandLevel.Require
            });

            request.RedirectToProvider();
        }

    To retrieve the access token I use:

        using (OpenIdRelyingParty openid = new OpenIdRelyingParty())
        {
            IAuthenticationResponse authResponse = openid.GetResponse();
            if (authResponse != null)
            {
                switch (authResponse.Status)
                {
                    case AuthenticationStatus.Authenticated:
                        HttpContext.Current.Trace.Write("AuthenticationStatus", "Authenticated");
                        FetchResponse fr = authResponse.GetExtension<FetchResponse>();

                        InMemoryOAuthTokenManager tm = new InMemoryOAuthTokenManager(
                            ConfigurationManager.AppSettings["googleConsumerKey"],
                            ConfigurationManager.AppSettings["googleConsumerSecret"]);

                        ServiceProviderDescription spd = new ServiceProviderDescription();
                        spd.RequestTokenEndpoint = new DotNetOpenAuth.Messaging.MessageReceivingEndpoint(
                            "https://accounts.google.com/o/oauth2/token",
                            HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest);
                        spd.AccessTokenEndpoint = new DotNetOpenAuth.Messaging.MessageReceivingEndpoint(
                            "https://accounts.google.com/o/oauth2/token",
                            HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest);
                        spd.UserAuthorizationEndpoint = new DotNetOpenAuth.Messaging.MessageReceivingEndpoint(
                            "https://accounts.google.com/o/oauth2/auth?access_type=offline",
                            HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest);
                        spd.TamperProtectionElements = new ITamperProtectionChannelBindingElement[] {
                            new HmacSha1SigningBindingElement() };

                        WebConsumer wc = new WebConsumer(spd, tm);
                        AuthorizedTokenResponse accessToken = wc.ProcessUserAuthorization();
                        if (accessToken != null)
                        {
                            HttpContext.Current.Trace.Write("accessToken", accessToken.ToString());
                        }
                        break;
                    case AuthenticationStatus.Canceled:
                        HttpContext.Current.Trace.Write("AuthenticationStatus", "Canceled");
                        break;
                    case AuthenticationStatus.Failed:
                        HttpContext.Current.Trace.Write("AuthenticationStatus", "Failed");
                        break;
                    default:
                        break;
                }
            }
        }

    Unfortunately I get AuthenticationStatus.Authenticated, but wc.ProcessUserAuthorization() returns null. What am I doing wrong? Thanks a lot for any help.

    Read the article

  • Tomcat 403 error after LDAP authentication.

    - by user352636
    I'm currently trying to use an LDAP server to authenticate users who are trying to access our Tomcat setup. I believe I have managed to get the LDAP authentication working in the form of a JNDI realm call from Tomcat, but immediately after the user enters their password, Tomcat starts throwing 403 (permission denied) errors for everything except the root page (http://localhost:1337/). I have no idea why this is happening. I am following the example at http://blog.mc-thias.org/?title=tomcat_ldap_authentication&more=1&c=1&tb=1&pb=1 .

    server.xml (the interesting/changed bits):

        <Realm className="org.apache.catalina.realm.JNDIRealm" debug="99"
               connectionURL="ldap://localhost:389"
               userPattern="uid={0},ou=People,o=test,dc=company,dc=uk"
               userSubTree="true"
               roleBase="ou=Roles,o=test,dc=company,dc=uk"
               roleName="cn"
               roleSearch="memberUid={1}" />
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />

    web.xml (the interesting/changed bits):

        <security-constraint>
            <display-name>Security Constraint</display-name>
            <web-resource-collection>
                <web-resource-name>Protected Area</web-resource-name>
                <!-- Define the context-relative URL(s) to be protected -->
                <url-pattern>/*</url-pattern>
                <!-- If you list http methods, only those methods are protected -->
            </web-resource-collection>
            <auth-constraint>
                <!-- Anyone with one of the listed roles may access this area -->
                <role-name>admin</role-name>
                <role-name>regular</role-name>
            </auth-constraint>
        </security-constraint>

        <!-- Default login configuration uses form-based authentication -->
        <login-config>
            <auth-method>BASIC</auth-method>
        </login-config>

        <!-- Security roles referenced by this web application -->
        <security-role>
            <role-name>admin</role-name>
            <role-name>regular</role-name>
        </security-role>

    I cannot access my LDAP setup at the moment, but I believe it is all right, as the login is accepted by the BASIC auth method; it's just Tomcat that is rejecting it. The roles should be as defined in web.xml - admin and regular. If there is any other information you need me to provide, please just ask! My thanks in advance to anyone who can help, and my apologies for any major mistakes I have made - yesterday was pretty much the first time I'd ever heard of LDAP. EDIT: Fixed the second XML segment. Apologies for the formatting fail.

    Read the article

  • Background selector declaration in CSS

    - by Shivanand
    Hello, in CSS the declaration for a selector is given as:

        background-attachment: scroll;
        background-color: transparent;
        background-image: url(/images/ucc/green/btn-part2.gif);
        background-repeat: no-repeat;
        background-position: right top;

    I want to optimize the code and change it to:

        background: scroll transparent url(/images/ucc/green/btn-part2.gif) no-repeat right top;

    My question is: is this the correct way, and does it work in IE7/8, Firefox and Safari?

    Read the article

  • Web scraping: how to get scraper implementation from text link?

    - by isme
    I'm building a Java web media-scraping application for extracting content from a variety of popular websites: YouTube, Facebook, Rapidshare, and so on. The application will include a search capability to find content URLs, but should also allow the user to paste a URL into the application if they already know where the media is. YouTube Downloader already does this for a variety of video sites. When the program is supplied with a URL, it decides which kind of scraper to use to get the content; for example, a YouTube watch link returns a YoutubeScraper, a Facebook fan-page link returns a FacebookScraper, and so on. Should I use the factory pattern to do this? My idea is that the factory has one public method. It takes a String argument representing a link and returns a suitable implementation of the Scraper interface. I guess the factory would hold a list of Scraper implementations and would match the link against each Scraper until it finds a suitable one. If there is no suitable one, it throws an Exception instead.
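    The factory described in the question maps naturally onto a small amount of code. The question targets Java, but the shape is the same in C#; here is a minimal sketch, with all type names (IScraper, YoutubeScraper, ScraperFactory) hypothetical rather than taken from any existing library:

        using System;
        using System.Collections.Generic;

        // Hypothetical scraper contract: each implementation knows which links it can handle.
        interface IScraper
        {
            bool CanHandle(string url);
            void Scrape(string url);
        }

        class YoutubeScraper : IScraper
        {
            public bool CanHandle(string url) { return url.Contains("youtube.com/watch"); }
            public void Scrape(string url) { /* download the video */ }
        }

        class ScraperFactory
        {
            // The factory holds one instance of every known scraper implementation.
            private readonly List<IScraper> scrapers = new List<IScraper> { new YoutubeScraper() };

            // Single public method: match the link against each scraper and return the first that fits.
            public IScraper GetScraper(string url)
            {
                foreach (IScraper scraper in scrapers)
                {
                    if (scraper.CanHandle(url))
                        return scraper;
                }
                throw new NotSupportedException("No scraper registered for: " + url);
            }
        }

    Supporting a new site then only means adding another implementation to the list; the calling code never changes.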

    Read the article

  • CSS3PIE: Internet Explorer 6 doesn't download PIE.htc

    - by Jonas
    I'm using the very impressive CSS3PIE (http://css3pie.com) library to add support for CSS3 styles in IE6-8. It works fine in versions 7 and 8 and took a lot of pain out of the process. However, in IE6 no CSS3 styles are shown at all. In fact, looking at the server logs, I can see that IE6 doesn't even download the PIE.htc file, which is necessary for the magic to work. The content type for the file is set correctly as text/x-component, it's referenced by absolute URL, and works fine in IE7 and 8. I'm using Compass (www.compass-style.org) and the PIE helper, which makes the CSS look like this:

        #shopping_cart {
            behavior: url("/media/static/css/PIE.htc");
            position: relative;
            border-radius: 10px;
        }

    I can't figure out what the problem is. Does anyone have any ideas what might cause IE6 to skip the behavior definition altogether? Cheers, Jonas

    Read the article

  • Kohana3: Absolute path to a file

    - by Svish
    Say I have a file in my Kohana 3 website called assets/somefile.jpg. I can get the URL to that file by doing:

        echo Url::site('assets/somefile.jpg'); // /kohana/assets/somefile.jpg

    Is there a way I can get the absolute path to that file? Like if I want to fopen it or get the size of the file or something like that. In other words, I would like to get something like /var/www/kohana/assets/somefile.jpg or W:\www\kohana\assets\somefile.jpg or whatever the absolute path is.

    Read the article

  • Handling credit cards and iOS

    - by Susan Jackie
    I am using an NSURLConnection asynchronous request to transmit credit card information to a secure third-party server. I do the following:

    1. I get the credit card number, CVV, etc. from the UITextFields.
    2. Encode the credit card information into a JSON format.
    3. Set it as the HTTP body of the NSURLConnection request as follows:

        NSURL *url = [NSURL URLWithString:@"https://www.example.com"];
        NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:url];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Accept"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
        [request setHTTPBody:[NSJSONSerialization dataWithJSONObject:params options:kNilOptions error:&parseError]];

    4. Send this information via an asynchronous request to a secure third-party server:

        [NSURLConnection sendAsynchronousRequest:request queue:queue completionHandler:^(NSURLResponse *response, NSData *data, NSError *requestError) {
            // handle the response
        }];

    What should I be considering to send user credit card information to a third-party server using an NSURLConnection asynchronous request?

    Read the article

  • Debug Mode for CodeIgniter?

    - by nebukadnezzar
    Does CodeIgniter provide a debug mode, for example when accessing an invalid URL? Ruby on Rails shows debugging messages when an incorrect URL has been given and the controller is unable to resolve it using the routes map. How would I enable such debugging messages in CodeIgniter? The profiler...

        $this->output->enable_profiler(TRUE);

    ...only affects single classes, but not all routes. So debugging without an actual debugger mode is a little... difficult. :-)

    Read the article

  • asp.net mvc 2 web application inside a Web site?

    - by Amitabh
    I have an ASP.NET Web Site deployed as a website inside IIS 7.5:

        http://localhost/WebSite

    Then I have a second ASP.NET MVC 2 web application which is deployed as a sub-application inside the above website, so the MVC application should work on the following URL:

        http://localhost/WebSite/MvcApp/

    The website works fine, but when I browse the MVC URL http://localhost/WebSite/MvcApp/ it gives the following error: HTTP Error 403.14 - Forbidden. The Web server is configured to not list the contents of this directory.
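    A 403.14 on an extensionless MVC URL usually means IIS tried to serve a directory listing instead of handing the request to ASP.NET routing. One common thing to check (a suggestion, not a guaranteed fix for this particular setup): make sure the sub-application's pool runs in Integrated pipeline mode, and consider asking IIS to run managed modules (including UrlRoutingModule) for every request in the sub-application's web.config. A minimal sketch of that setting:

        <!-- web.config of the MVC sub-application (sketch) -->
        <configuration>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
          </system.webServer>
        </configuration>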

    Read the article

  • Flex: HTTP request error #2032

    - by alexey
    In a Flex 3 application I use the HTTPService class to make requests to the server:

        var http:HTTPService = new HTTPService();
        http.method = 'POST';
        http.url = hostUrl;
        http.resultFormat = 'e4x';
        http.addEventListener(ResultEvent.RESULT, ...);
        http.addEventListener(FaultEvent.FAULT, ...);
        http.send(params);

    The application has a Comet architecture, so it makes long-running requests. While waiting for a response to such a request, other requests can be made concurrently. The application works in most cases, but sometimes some clients get an HTTP request error executing the long-running request:

        faultCode:Server.Error.Request
        faultString:'HTTP request error'
        faultDetail:'Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032"]. URL: 'http://example.com/ws'

    I think it depends on the user's browser. Any ideas?

    Read the article

  • PHP robots.txt parsing

    - by omfgroflmao
    Is there an easier way to do this?

        function parse_robots_txt($URL) {
            $parsed = parse_url($URL);
            $robots = file_get_contents('http://'.$parsed['host'].'/robots.txt', FILE_TEXT);
            $exploded = explode('user-agent:', strtolower($robots));
            foreach ($exploded as $user_agent) {
                $user_agent = trim($user_agent);
                if (substr($user_agent, 0, 1) == '*') {
                    $user_agent = str_replace('#', '', preg_replace('/#.*\\n/i', '', $user_agent));
                    $user_agent = str_replace('disallow:', '', substr($user_agent, 1));
                    $user_agent = preg_replace('/allow:/i', '+-+-+-+', $user_agent, 1);
                    $user_agent = str_replace('allow:', '', $user_agent);
                    print_r(explode('+-+-+-+', $user_agent));
                }
            }
        }

    Read the article

  • Bing search API using Jsonp not working, invalid label

    - by Blankman
    Struggling with Bing's JSON request (Bing search, not map), I am getting an error back that says 'Invalid Label'. My query URL is:

        var bingurl = "http://api.search.live.net/json.aspx?Appid=##APIKEY##&query=Honda&sources=web";

        $.ajax({
            type: "GET",
            url: bingurl,
            data: "{}",
            contentType: "application/json; charset=utf-8",
            dataType: "jsonp",
            success: function(data) {
                $callBack(data);
            },
            error: function(msg) {
                alert("error" + msg);
            }
        });

    Firebug reports 'invalid label' and then dumps the JSON response. No idea what is wrong. Help appreciated.

    Read the article

  • Is there a more efficient way to get the number of search results from a google query?

    - by highone
    Right now I am using this code:

        string url = "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=hey&esrch=FT1";
        string source = getPageSource(url);
        string[] stringSeparators = new string[] { "<b>", "</b>" };
        string[] b = source.Split(stringSeparators, StringSplitOptions.None);
        bool isResultNum = false;
        foreach (string s in b)
        {
            if (isResultNum)
            {
                MessageBox.Show(s.Replace(",", ""));
                return;
            }
            if (s.Contains(" of about "))
            {
                isResultNum = true;
            }
        }

    Unfortunately it is very slow. Is there a better way to do it? Also, is it legal to query Google like this? From the answer in this question it didn't sound like it was: http://stackoverflow.com/questions/903747/how-to-download-google-search-results
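    For what it's worth, the number can be pulled out of the same page source with a single regular expression instead of splitting on every <b> tag. This is only a sketch and rests on the same fragile assumption as the code above, namely that the markup still contains an "of about <b>N</b>" fragment; the class and method names are mine, and the terms-of-service concern raised at the end applies regardless of how the count is parsed.

        using System;
        using System.Text.RegularExpressions;

        static class ResultCountParser
        {
            // Sketch: extract the result count with one regex pass over the page source.
            // Assumes the page renders the count as: ... of about <b>1,234,567</b> ...
            public static long? ExtractResultCount(string source)
            {
                Match m = Regex.Match(source, @"of about\s*<b>([\d,]+)</b>");
                if (!m.Success)
                    return null;
                return long.Parse(m.Groups[1].Value.Replace(",", ""));
            }
        }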

    Read the article

  • iPhone Development: Get images from RSS feed

    - by Matthew Saeger
    I am using the NSXMLParser to get new RSS stories from a feed and am displaying them in a UITableView. However, now I want to take ONLY the images and display them in a UIScrollView/UIImageView (3 images side-by-side). I am completely lost. I am using the following code to obtain one image from a URL:

        NSURL *theUrl1 = [NSURL URLWithString:@"http://farm3.static.flickr.com/2586/4072164719_0fa5695f59.jpg"];
        JImage *photoImage1 = [[JImage alloc] init];
        [photoImage1 setContentMode:UIViewContentModeScaleAspectFill];
        [photoImage1 setFrame:CGRectMake(0, 0, 320, 170)];
        [photoImage1 initWithImageAtURL:theUrl1];
        [imageView1 addSubview:photoImage1];
        [photoImage1 release];

    This is all I have accomplished, and it works for one image, and I have to specify the exact URL. What would you recommend I do to accomplish this?

    Read the article

  • Qooxdoo REST JSON request problem - unexpected token and then timeout

    - by freiksenet
    Hello! I am learning the Qooxdoo framework and I am trying to make it work with a small Django web service. The Django web service just returns JSON data like this:

        {
            "name": "Football",
            "description": "The most popular sport."
        }

    Then I use the following code to query that URL:

        var req = new qx.io.remote.Request(url, "GET", "application/json");
        req.toggleCrossDomain();
        req.addListener("completed", function(e) {
            alert(e.getContent());
        });
        req.send();

    Unfortunately, when I execute the code I get an unexpected token error and then the request times out:

        Uncaught SyntaxError: Unexpected token :            Native.js:91013011
        qx.io.remote.RequestQueue[246]: Timeout: transport 248    Native.js:91013011
        qx.io.remote.RequestQueue[246]: 5036ms > 5000ms           Native.js:91013013
        qx.io.remote.Exchange[248]: Timeout: implementation 249

    JSLint reports that this is valid JSON, so I wonder why Qooxdoo doesn't parse it correctly.

    Read the article

  • How come I get a timeout when I try to download something off my own domain?

    - by alex
        def download(source_url):
            socket.setdefaulttimeout(10)
            agents = ['Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)',
                      'Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.1)',
                      'Microsoft Internet Explorer/4.0b1 (Windows 95)',
                      'Opera/8.00 (Windows NT 5.1; U; en)']
            ree = urllib2.Request(source_url)
            ree.add_header('User-Agent', random.choice(agents))
            resp = urllib2.urlopen(ree)
            htmlSource = resp.read()
            return htmlSource

        url = "http://myIP/details/?id=4"
        result_html = download(url)

    It shouldn't time out... even with the 10-second timeout.

    Read the article

  • Extract form variable in AJAX response using jquery

    - by Jake
    All, I have a jQuery AJAX request calling out to a URL. The AJAX response I receive is an HTML form with one hidden variable in it. As soon as my AJAX request is successful, I would like to retrieve the value of the hidden variable. How do I do that? Example: the html_response for the AJAX call is:

        <html><head></head><body><form name="frmValues"><input type="hidden" name="priceValue" value="100"></form></body></html>

        $.ajax({
            type: 'GET',
            url: "/abc/xyz/getName?id=" + 101,
            cache: false,
            dataType: "html",
            success: function(html_response) {
                // Extract form variable "priceValue" from html_response
                // Alert the variable data.
            }
        });

    Thanks

    Read the article

  • Reporting Services as PDF through WebRequest in C# 3.5 "Not Supported File Type"

    - by Heath Allison
    I've inherited a legacy application that is supposed to grab an on-the-fly PDF from a Reporting Services server. Everything works fine up until the point where you try to open the PDF being returned, and Adobe Acrobat tells you: "Adobe Reader could not open 'thisStoopidReport.pdf' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded)." I've done some initial troubleshooting on this. If I replace the URL in the WebRequest.Create() call with a valid PDF file on my local machine (e.g. @"C:temp/validpdf.pdf"), then I get a valid PDF. The report itself seems to work fine: if I manually type the URL to the Reporting Services report that should generate the PDF file, I am prompted for user authentication, but after supplying it I get a valid PDF file. I've replaced the actual URL, username, password and domain strings in the code below with bogus values for obvious reasons.

        WebRequest request = WebRequest.Create(@"http://x.x.x.x/reportServer?/reports/reportNam&rs:format=pdf&rs:command=render&rc:parameters=blahblahblah");
        int totalSize = 0;
        request.Credentials = new NetworkCredential("validUser", "validPass", "validDomain");
        request.Timeout = 360000; // 6 minutes in milliseconds.
        request.Method = WebRequestMethods.Http.Post;
        request.ContentLength = 0;

        WebResponse response = request.GetResponse();
        Response.Clear();

        BinaryReader reader = new BinaryReader(response.GetResponseStream());
        Byte[] buffer = new byte[2048];
        int count = reader.Read(buffer, 0, 2048);
        while (count > 0)
        {
            totalSize += count;
            Response.OutputStream.Write(buffer, 0, count);
            count = reader.Read(buffer, 0, 2048);
        }

        Response.ContentType = "application/pdf";
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.CacheControl = "private";
        Response.Expires = 30;
        Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");
        Response.AddHeader("Content-Length", totalSize.ToString());
        reader.Close();
        Response.Flush();
        Response.End();
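    One quick way to narrow this down (a diagnostic sketch, not a confirmed fix): before streaming bytes to the browser, look at the ContentType the report server actually returned. If it is text/html rather than application/pdf, what gets saved as thisStoopidReport.pdf is an error or login page, which would produce exactly the Acrobat message above. The snippet continues from the question's code ('response' is the WebResponse obtained there) and assumes the usual System, System.IO and System.Net usings:

        // Diagnostic sketch: check what the report server actually sent back before streaming it.
        if (!response.ContentType.StartsWith("application/pdf", StringComparison.OrdinalIgnoreCase))
        {
            using (StreamReader sr = new StreamReader(response.GetResponseStream()))
            {
                string body = sr.ReadToEnd();
                // Most likely an HTML error or login page from Reporting Services.
                System.Diagnostics.Trace.WriteLine("Unexpected content type: " + response.ContentType);
                System.Diagnostics.Trace.WriteLine(body.Substring(0, Math.Min(500, body.Length)));
            }
        }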

    Read the article
