Search Results

Search found 349 results on 14 pages for 'webrequest'.

Page 11/14 | < Previous Page | 7 8 9 10 11 12 13 14  | Next Page >

  • Linq to SQL generates StackOverflowException in tight Insert loop

    - by ChrisW
    I'm parsing an XML file and inserting the rows into a table (and related tables) using LINQ to SQL. I parse the XML file using LINQ to XML into an IEnumerable. Then I create a foreach loop, where I build my LINQ to SQL objects and call InsertOnSubmit and SubmitChanges at the end of each loop. Nothing special here. Usually I make it through around 4,100 records before receiving a StackOverflowException from LINQ to SQL, right as I call SubmitChanges. It's not always at 4,100... sometimes it's 4,102, sometimes less, etc. I've tried inserting the records that generate the failure individually, by putting them in their own XML file, and they insert fine... so it's not the data. I'm running the whole process from an MVC2 app that uploads the XML file to the server. I've adjusted my WebRequest timeouts to appropriate values, and again, I'm not getting timeout errors, just StackOverflowExceptions. So is there some pattern that I should follow for times when I have to do many insertions into the database? I never encounter this exception on smaller XML files, just larger ones.
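
    One mitigation worth trying (a sketch, not from the original post): dispose and recreate the DataContext every few hundred rows so its change tracker and identity cache never grow without bound. MyDataContext, Items, BuildRow and rows below are placeholders for the poster's actual types.

      // Hypothetical sketch: recycle the DataContext periodically during a large import.
      const int batchSize = 500;
      MyDataContext db = new MyDataContext();
      int count = 0;
      foreach (XElement element in rows)
      {
          db.Items.InsertOnSubmit(BuildRow(element));
          if (++count % batchSize == 0)
          {
              db.SubmitChanges();
              db.Dispose();              // drop the accumulated change tracker
              db = new MyDataContext();  // continue with a fresh, empty context
          }
      }
      db.SubmitChanges();
      db.Dispose();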

    Read the article

  • Display post data!

    - by Comii
    I am trying to post data from a VB.NET application to an ASMX web service located on a server. To post the data from the VB.NET application I am using this code:

      Public Function Post(ByVal url As String, ByVal data As String) As String
          Dim vystup As String = Nothing
          Try
              'Our postvars
              Dim buffer As Byte() = Encoding.ASCII.GetBytes(data)
              'Initialisation, we use localhost, change if applicable
              Dim WebReq As HttpWebRequest = DirectCast(WebRequest.Create(url), HttpWebRequest)
              'Our method is post, otherwise the buffer (postvars) would be useless
              WebReq.Method = "POST"
              'We use form contentType, for the postvars.
              WebReq.ContentType = "application/x-www-form-urlencoded"
              'The length of the buffer (postvars) is used as contentlength.
              WebReq.ContentLength = buffer.Length
              'We open a stream for writing the postvars
              Dim PostData As Stream = WebReq.GetRequestStream()
              'Now we write, and afterwards, we close. Closing is always important!
              PostData.Write(buffer, 0, buffer.Length)
              PostData.Close()
              'Get the response handle, we have no true response yet!
              Dim WebResp As HttpWebResponse = DirectCast(WebReq.GetResponse(), HttpWebResponse)
              'Let's show some information about the response
              Console.WriteLine(WebResp.StatusCode)
              Console.WriteLine(WebResp.Server)
              'Now, we read the response (the string), and output it.
              Dim Answer As Stream = WebResp.GetResponseStream()
              Dim _Answer As New StreamReader(Answer)
              'Congratulations, you just requested your first POST page, you
              'can now start logging into most login forms, with your application
              'Or other examples.
              vystup = _Answer.ReadToEnd()
          Catch ex As Exception
              MessageBox.Show(ex.Message)
          End Try
          Return vystup.Trim() & vbLf
      End Function

    Now how can I retrieve this data in the ASMX service?
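
    If the client posts ordinary name/value pairs (for example "name=abc&value=123") to http://server/MyService.asmx/SaveData, the ASMX side can pick them up either as WebMethod parameters or from Request.Form. A hedged C# sketch, with MyService, SaveData and the field names as placeholders, assuming the HttpPost protocol is enabled in web.config:

      using System.Web;
      using System.Web.Services;

      public class MyService : WebService
      {
          [WebMethod]
          public string SaveData(string name, string value)
          {
              // Alternatively, read the raw form pairs directly:
              // string name = HttpContext.Current.Request.Form["name"];
              return "received " + name + "=" + value;
          }
      }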

    Read the article

  • SSL certificate pre-fetch .NET

    - by Wil P
    I am writing a utility that would allow me to monitor the health of our websites. This consists of a series of validation tasks that I can run against a web application. One of the tests is to anticipate the expiration of a particular SSL certificate. I am looking for a way to pre-fetch the SSL certificate installed on a web site using .NET or a WINAPI so that I can validate the expiration date of the certificate associated with a particular website. One way I could do this is to cache the certificates when they are validated in the ServicePointManager.ServerCertificateValidationCallback handler and then match them up with configured web sites, but this seems a bit hackish. Another would be to configure the application with the certificate for the website, but I'd rather avoid this if I can in order to minimize configuration. What would be the easiest way for me to download an SSL certificate associated with a website using .NET so that I can inspect the information the certificate contains to validate it? EDIT: To extend on the answer below, there is no need to manually create the ServicePoint prior to creating the request. It is generated on the request object as part of executing the request.

      private static string GetSSLExpiration(string url)
      {
          HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
          using (WebResponse response = request.GetResponse())
          {
          }
          if (request.ServicePoint.Certificate != null)
          {
              return request.ServicePoint.Certificate.GetExpirationDateString();
          }
          else
          {
              return string.Empty;
          }
      }

    Read the article

  • Can't connect to HTTPS using X509 client certificate

    - by wows
    Hi - I'm new to cryptography and I'm a bit stuck: I'm trying to connect (from my development environment) to a web service using HTTPS. The web service requires a client certificate, which I think I've installed correctly. They have supplied me with a .PFX file. In Windows 7, I double-clicked the file to install it into my Current User - Personal certificate store. I then exported an X509 Base-64 encoded .cer file from the certificate entry in the store. It didn't have a private key associated with it. Then, in my app, I'm attempting to connect to the service like this:

      var certificate = X509Certificate.CreateFromCertFile("xyz.cer");
      var serviceUrl = "https://xyz";
      var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
      request.ClientCertificates.Add(certificate);
      request.Method = WebRequestMethods.Http.Post;
      request.ContentType = "application/x-www-form-urlencoded";

    I get a 502 Connection failed when I connect. Is there anything you can see wrong with this method? Our production environment seems to work with a similar configuration, but it's running Windows Server 2003. Thanks!
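
    One likely cause (a suggestion, not from the original post) is that the exported .cer contains no private key, and a client certificate cannot authenticate without one. A hedged sketch that loads the supplied .pfx directly; the file name and password are placeholders:

      using System.Net;
      using System.Security.Cryptography.X509Certificates;

      // Load the PFX, which carries the private key needed for client authentication.
      var certificate = new X509Certificate2("xyz.pfx", "pfx-password");
      var request = (HttpWebRequest)WebRequest.Create("https://xyz");
      request.ClientCertificates.Add(certificate);
      request.Method = "POST";
      request.ContentType = "application/x-www-form-urlencoded";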

    Read the article

  • 550 Error When I try to get the size of a file on an FTP

    - by Eric
    I'm trying to use an FtpWebRequest to get the size of a file on a company FTP. Yet whenever I try to get the response, an exception is thrown. See the error details in the catch block in the code below.

      string uri = "ftp://ftp.domain.com/folder/folder/file.xxx";
      FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(uri);
      sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
      sizeReq.Credentials = cred;
      sizeReq.UsePassive = proj.ServerConfig.UsePassive; //true
      sizeReq.UseBinary = proj.ServerConfig.UseBinary; //true
      sizeReq.KeepAlive = proj.ServerConfig.KeepAlive; //false
      long size;
      try
      {
          //Exception thrown here when I try to get the response
          using (FtpWebResponse fileSizeResponse = (FtpWebResponse)sizeReq.GetResponse())
          {
              size = fileSizeResponse.ContentLength;
          }
      }
      catch (WebException exp)
      {
          FtpWebResponse resp = (FtpWebResponse)exp.Response;
          MessageBox.Show(exp.Message); // "The remote server returned an error: (550) File unavailable (e.g., file not found, no access)."
          MessageBox.Show(exp.Status.ToString()); //ProtocolError
          MessageBox.Show(resp.StatusCode.ToString()); // ActionNotTakenFileUnavailable
          MessageBox.Show(resp.StatusDescription.ToString()); //"550 SIZE: Operation not permitted\r\n"
      }

    This code does work, however, when connected to my personal FTP. The StatusDescription of the response says that the operation is "not permitted". Could it be that my office FTP just won't allow querying a file size? I've also tried listing the directory details, which will return the size, and have noticed that my office FTP reports the directory details in a different format than my personal FTP. Maybe this is the problem?

      //work ftp ListDirectoryDetails
      -rw-r--r-- 1 (?) user 12345 Nov 16 20:28 some file name.xxx
      //personal ftp ListDirectoryDetails
      -rw-r--r-- 1 user user 12345 Mar 13 some file name.xxx

    From reading this blog post I think that my personal FTP is returning a Unix-formatted response, but my work FTP is returning a Windows-formatted response. Maybe this is unrelated, but I thought I'd mention it.

    Read the article

  • How to send a JSONObject to a REST service?

    - by Sebi
    Retrieving data from the REST server works well, but if I want to post an object it doesn't work:

      public static void postJSONObject(int store_type, FavoriteItem favorite, String token, String objectName) {
          String url = "";
          switch (store_type) {
              case STORE_PROJECT:
                  url = URL_STORE_PROJECT_PART1 + token + URL_STORE_PROJECT_PART2;
                  //data = favorite.getAsJSONObject();
                  break;
          }
          HttpClient httpClient = new DefaultHttpClient();
          HttpPost postMethod = new HttpPost(url);
          try {
              HttpEntity entity = new StringEntity("{\"ID\":0,\"Name\":\"Mein Projekt10\"}");
              postMethod.setEntity(entity);
              HttpResponse response = httpClient.execute(postMethod);
              Log.i("JSONStore", "Post request, to URL: " + url);
              System.out.println("Status code: " + response.getStatusLine().getStatusCode());
          } catch (ClientProtocolException e) {

    I always get a 400 error code. Does anybody know what's wrong? I have working C# code, but I can't convert it:

      System.Net.WebRequest wr = System.Net.HttpWebRequest.Create("http://localhost:51273/WSUser.svc/pak3omxtEuLrzHSUSbQP/project");
      wr.Method = "POST";
      string data = "{\"ID\":1,\"Name\":\"Mein Projekt\"}";
      byte[] d = UTF8Encoding.UTF8.GetBytes(data);
      wr.ContentLength = d.Length;
      wr.ContentType = "application/json";
      wr.GetRequestStream().Write(d, 0, d.Length);
      System.Net.WebResponse wresp = wr.GetResponse();
      System.IO.StreamReader sr = new System.IO.StreamReader(wresp.GetResponseStream());
      string line = sr.ReadToEnd();

    Read the article

  • C# WebClient OpenRead url

    - by Octopus-Paul
    So I have this program that fetches a page using a short link (I used Google's URL shortener). To build my example I used the code from "Using WebClient in C# is there a way to get the URL of a site after being redirected?":

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.IO;
      using System.Net;

      namespace ConsoleApplication1
      {
          class Program
          {
              static void Main(string[] args)
              {
                  MyWebClient client = new MyWebClient();
                  client.OpenRead("http://tinyurl.com/345yj7x");
                  Uri uri = client.ResponseUri;
                  Console.WriteLine(uri.AbsoluteUri);
                  Console.Read();
              }
          }

          class MyWebClient : WebClient
          {
              Uri _responseUri;

              public Uri ResponseUri
              {
                  get { return _responseUri; }
              }

              protected override WebResponse GetWebResponse(WebRequest request)
              {
                  WebResponse response = base.GetWebResponse(request);
                  _responseUri = response.ResponseUri;
                  return response;
              }
          }
      }

    I do not understand one thing: when I call client.OpenRead("http://tinyurl.com/345yj7x"), does this download the page that the URL points to? If this method downloads the page, I need something that gets me only the URL, so if there is a method to get only some headers, or only the URL, please let me know.
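
    A hedged alternative sketch: issue a HEAD request, which follows the redirect but asks the server not to send a body, and then read ResponseUri. Some servers refuse HEAD, in which case the WebClient subclass above still works:

      using System;
      using System.Net;

      class Redirect
      {
          static void Main()
          {
              HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://tinyurl.com/345yj7x");
              request.Method = "HEAD";
              request.AllowAutoRedirect = true;   // default, shown for clarity

              using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
              {
                  // Only headers were transferred; this is the expanded URL.
                  Console.WriteLine(response.ResponseUri.AbsoluteUri);
              }
          }
      }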

    Read the article

  • Link checker ; how to avoid false positives

    - by Burnzy
    I'm working on a link checker / broken-link finder and I am getting many false positives. After double-checking I noticed that many links were returning WebExceptions but were actually downloadable, and in some other cases the status code is 404 yet I can access the page from the browser. So here is the code; it's pretty ugly and I'd like something more, I'd say, practical. All the status codes in that big if are used to filter out the ones I don't want to add to brokenlinks because they are valid links (I tested them all). What I need to fix is the structure (if possible) and how to not get false 404s. Thank you!

      try
      {
          HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
          request.Method = "Head";
          request.MaximumResponseHeadersLength = 32; // FOR IE SLOW SPEED
          request.AllowAutoRedirect = true;

          using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
          {
              request.Abort();
          }

          /*
          WebClient wc = new WebClient();
          wc.DownloadString( uri );
          */

          _validlinks.Add(strUri);
      }
      catch (WebException wex)
      {
          if (!wex.Message.Contains("The remote name could not be resolved:") &&
              wex.Status != WebExceptionStatus.ServerProtocolViolation)
          {
              if (wex.Status != WebExceptionStatus.Timeout)
              {
                  HttpStatusCode code = ((HttpWebResponse)wex.Response).StatusCode;
                  if (code != HttpStatusCode.OK && code != HttpStatusCode.BadRequest &&
                      code != HttpStatusCode.Accepted && code != HttpStatusCode.InternalServerError &&
                      code != HttpStatusCode.Forbidden && code != HttpStatusCode.Redirect &&
                      code != HttpStatusCode.Found)
                  {
                      _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
                  }
                  else
                      _validlinks.Add(strUri);
              }
              else
                  _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
          }
          else
              _validlinks.Add(strUri);
      }
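
    A hedged sketch of one way to simplify the check and reduce false 404s: keep an explicit whitelist of "alive" status codes, and retry once with GET when HEAD fails, since some servers answer HEAD incorrectly for pages a browser can load. Acceptable and LinkResponds are hypothetical names; the caller would add the URI to _validlinks or _brokenlinks based on the result.

      private static readonly HashSet<HttpStatusCode> Acceptable = new HashSet<HttpStatusCode>
      {
          HttpStatusCode.OK, HttpStatusCode.Accepted, HttpStatusCode.Redirect,
          HttpStatusCode.Found, HttpStatusCode.Forbidden
      };

      private bool LinkResponds(string uri, string method)
      {
          try
          {
              HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
              request.Method = method;
              request.AllowAutoRedirect = true;
              using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
              {
                  return Acceptable.Contains(response.StatusCode);
              }
          }
          catch (WebException wex)
          {
              HttpWebResponse response = wex.Response as HttpWebResponse;
              if (response != null && Acceptable.Contains(response.StatusCode))
                  return true;
              // Fall back to a full GET once before declaring the link broken.
              return method == "HEAD" && LinkResponds(uri, "GET");
          }
      }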

    Read the article

  • Problem pulling data from website in .NET and C#

    - by Cptcecil
    I have written a web scraping program to go to a list of pages and write all the HTML to a file. The problem is that when I pull a block of text some of the characters get written as '?'. How do I pull those characters into my text file? Here is my code:

      string baseUri = String.Format("http://www.rogersmushrooms.com/gallery/loadimage.asp?did={0}&blockName={1}", id.ToString(), name.Trim());

      // our third request is for the actual webpage after the login.
      HttpWebRequest request = (HttpWebRequest)WebRequest.Create(baseUri);
      request.Method = "GET";
      request.UserAgent = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)";

      //get the response object, so that we may get the session cookie.
      HttpWebResponse response = (HttpWebResponse)request.GetResponse();
      StreamReader reader = new StreamReader(response.GetResponseStream());
      // and read the response
      string page = reader.ReadToEnd();

      StreamWriter SW;
      string filename = string.Format("{0}.txt", id.ToString());
      SW = File.AppendText("C:\\Share\\" + filename);
      SW.Write(page);
      reader.Close();
      response.Close();
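
    The '?' characters are typically an encoding mismatch. A hedged sketch that reads with the charset the server reports and writes the file as UTF-8 (the UTF-8 fallback is an assumption); it continues from the poster's response and filename variables:

      // Decode using the encoding declared by the server, falling back to UTF-8.
      Encoding pageEncoding = Encoding.UTF8;
      if (!string.IsNullOrEmpty(response.CharacterSet))
          pageEncoding = Encoding.GetEncoding(response.CharacterSet);

      string page;
      using (StreamReader reader = new StreamReader(response.GetResponseStream(), pageEncoding))
      {
          page = reader.ReadToEnd();
      }

      // Write the file with an explicit encoding so non-ASCII characters survive.
      using (StreamWriter writer = new StreamWriter("C:\\Share\\" + filename, true, Encoding.UTF8))
      {
          writer.Write(page);
      }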

    Read the article

  • WCF JSON Service returns XML on Fault

    - by Anthony Johnston
    I am running a ServiceHost to test one of my services and all works fine until I throw a FaultException - bang, I get XML, not JSON. My service contract - lovely:

      /// <summary>
      /// <para>Get category by id</para>
      /// </summary>
      [OperationContract(AsyncPattern = true)]
      [FaultContract(typeof(CategoryNotFound))]
      [FaultContract(typeof(UnexpectedExceptionDetail))]
      IAsyncResult BeginCategoryById(CategoryByIdRequest request, AsyncCallback callback, object state);

      CategoryByIdResponse EndCategoryById(IAsyncResult result);

    Host set-up - scrummy yum:

      var host = new ServiceHost(serviceType, new Uri(serviceUrl));
      host.AddServiceEndpoint(serviceContract, new WebHttpBinding(), "")
          .Behaviors.Add(new WebHttpBehavior
          {
              DefaultBodyStyle = WebMessageBodyStyle.Bare,
              DefaultOutgoingResponseFormat = WebMessageFormat.Json,
              FaultExceptionEnabled = true
          });
      host.Open();

    Here's the call - oo belly ache:

      var request = WebRequest.Create(serviceUrl + "/" + serviceName);
      request.Method = "POST";
      request.ContentType = "application/json; charset=utf-8";
      request.ContentLength = 0;
      try
      {
          // receive response
          using (var response = request.GetResponse())
          {
              var responseStream = response.GetResponseStream();
              // convert back into referenced object for verification
              var deserialiser = new DataContractJsonSerializer(typeof(TResponseData));
              return (TResponseData)deserialiser.ReadObject(responseStream);
          }
      }
      catch (WebException wex)
      {
          var response = wex.Response;
          using (var responseStream = response.GetResponseStream())
          {
              // convert back into fault
              //var deserialiser = new DataContractJsonSerializer(typeof(FaultException<CategoryNotFound>));
              //var fex = (FaultException<CategoryNotFound>)deserialiser.ReadObject(responseStream);
              var text = new StreamReader(responseStream).ReadToEnd();
              var fex = new Exception(text, wex);
              Logger.Error(fex);
              throw fex;
          }
      }

    The text var contains the correct fault, but serialized as XML. What have I done wrong here?
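
    One hedged option, assuming the service can move to .NET 4's System.ServiceModel.Web: throw WebFaultException<T> from the operation, which lets the WebHttp layer serialize the fault detail in the endpoint's outgoing format (JSON here) together with an HTTP status code. The synchronous operation and FindCategory below are placeholders for the poster's async pair:

      using System.Net;
      using System.ServiceModel.Web;

      public CategoryByIdResponse CategoryById(CategoryByIdRequest request)
      {
          var category = FindCategory(request);   // placeholder for the real lookup
          if (category == null)
          {
              // The detail object travels as JSON with a 404 status instead of an XML fault.
              throw new WebFaultException<CategoryNotFound>(
                  new CategoryNotFound(), HttpStatusCode.NotFound);
          }
          return new CategoryByIdResponse();
      }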

    Read the article

  • The following code to check if a file exists on a server does not work

    - by xplorer2k
    Hi everyone, I found the following code to check if a file exists on a server, but it is not working for me. It tells me that "test1.txt" does not exist even though the file exists and its size is 498 bytes. If I try with Ftp.ListDirectory it tells me that the file does not exist. If I try with Ftp.GetFileSize it does not provide any results and the debugger's Immediate window gives me the following message: A first chance exception of type 'System.Net.WebException' occurred in System.dll. Using "request.UseBinary = true" does not make any difference. I have posted this same question at this link: http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/89e05cf3-189f-48b7-ba28-f93b1a9d44ae Could someone help me fix it?

      private void button1_Click(object sender, EventArgs e)
      {
          string ftpServerIP = txtIPaddress.Text.Trim();
          string ftpUserID = txtUsername.Text.Trim();
          string ftpPassword = txtPassword.Text.Trim();
          try
          {
              FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://" + ftpServerIP + "//tmp/test1.txt");
              request.Method = WebRequestMethods.Ftp.ListDirectory;
              //request.Method = WebRequestMethods.Ftp.GetFileSize;
              request.Credentials = new NetworkCredential(ftpUserID, ftpPassword);
              //request.UseBinary = true;
              using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
              {
                  // Okay.
                  textBox1.AppendText(Environment.NewLine);
                  textBox1.AppendText("File exist");
              }
          }
          catch (WebException ex)
          {
              if (ex.Response != null)
              {
                  FtpWebResponse response = (FtpWebResponse)ex.Response;
                  if (response.StatusCode == FtpStatusCode.ActionNotTakenFileUnavailable)
                  {
                      // Directory not found.
                      textBox1.AppendText(Environment.NewLine);
                      textBox1.AppendText("File does not exist");
                  }
              }
          }
      }

    Thanks very much, xplorer2k

    Read the article

  • multi-threaded proxy checker having problems

    - by Paul
    Hello everyone, I am trying to create a proxy checker. This is my first attempt at multithreading and it's not going so well; the threads seem to wait for one to complete before starting the next.

      Imports System.Net
      Imports System.IO
      Imports System.Threading

      Public Class Form1
          Public sFileName As String
          Public srFileReader As System.IO.StreamReader
          Public sInputLine As String

          Public Class WebCall
              Public proxy As String
              Public htmlout As String

              Public Sub New(ByVal proxy As String)
                  Me.proxy = proxy
              End Sub

              Public Event ThreadComplete(ByVal htmlout As String)

              Public Sub send()
                  Dim myWebRequest As HttpWebRequest = CType(WebRequest.Create("http://www.myserver.com/ip.php"), HttpWebRequest)
                  myWebRequest.Proxy = New WebProxy(proxy, False)
                  Try
                      Dim myWebResponse As HttpWebResponse = CType(myWebRequest.GetResponse(), HttpWebResponse)
                      Dim loResponseStream As StreamReader = New StreamReader(myWebResponse.GetResponseStream())
                      htmlout = loResponseStream.ReadToEnd()
                      Debug.WriteLine("Finished - " & htmlout)
                      RaiseEvent ThreadComplete(htmlout)
                  Catch ex As WebException
                      If (ex.Status = WebExceptionStatus.ConnectFailure) Then
                      End If
                      Debug.WriteLine("Failed - " & proxy)
                  End Try
              End Sub
          End Class

          Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
              Dim proxy As String
              Dim webArray As New ArrayList()
              Dim n As Integer
              For n = 0 To 2
                  proxy = srFileReader.ReadLine()
                  webArray.Add(New WebCall(proxy))
              Next

              Dim w As WebCall
              For Each w In webArray
                  Threading.ThreadPool.QueueUserWorkItem(New WaitCallback(AddressOf w.send), w)
              Next w
          End Sub

          Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
              srFileReader = System.IO.File.OpenText("proxies.txt")
          End Sub
      End Class

    Read the article

  • How to Capture a live stream from Windows Media Server 2008

    - by Hummad Hassan
    I want to capture the live stream from Windows Media Server to the filesystem on my PC. I have tried with my own media server using the following code, but when I checked the output file I found this in it.

      FileStream fs = null;
      try
      {
          HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
          CookieContainer ci = new CookieContainer(1000);
          req.Timeout = 60000;
          req.Method = "Get";
          req.KeepAlive = true;
          req.MaximumAutomaticRedirections = 99;
          req.UseDefaultCredentials = true;
          req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
          req.ReadWriteTimeout = 90000000;
          req.CookieContainer = ci;
          //req.MediaType = "video/x-ms-asf";
          req.AllowWriteStreamBuffering = true;

          HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
          Stream resps = resp.GetResponseStream();
          fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite);

          byte[] buffer = new byte[1024];
          int bytesRead = 0;
          while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
          {
              fs.Write(buffer, 0, bytesRead);
          }
      }
      catch (Exception ex)
      {
      }
      finally
      {
          if (fs != null)
              fs.Close();
      }

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article

  • .NET: Is it possible to get HttpWebRequest to automatically decompress gzip'd responses?

    - by Cheeso
    In this answer, I described how I resorted to wrapping a GZipStream around the response stream in an HttpWebResponse, in order to decompress it. The relevant code looks like this:

      HttpWebRequest hwr = (HttpWebRequest)WebRequest.Create(url);
      hwr.CookieContainer = PersistentCookies.GetCookieContainerForUrl(url);
      hwr.Accept = "text/xml, */*";
      hwr.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip, deflate");
      hwr.Headers.Add(HttpRequestHeader.AcceptLanguage, "en-us");
      hwr.UserAgent = "My special app";
      hwr.KeepAlive = true;

      var resp = (HttpWebResponse)hwr.GetResponse();
      using (Stream s = resp.GetResponseStream())
      {
          Stream s2 = s;
          if (resp.ContentEncoding.ToLower().Contains("gzip"))
              s2 = new GZipStream(s2, CompressionMode.Decompress);
          else if (resp.ContentEncoding.ToLower().Contains("deflate"))
              s2 = new DeflateStream(s2, CompressionMode.Decompress);
          ... use s2 ...
      }

    Is there a way to get HttpWebResponse to provide a decompressing stream automatically? In other words, a way for me to eliminate the following from the above code:

      Stream s2 = s;
      if (resp.ContentEncoding.ToLower().Contains("gzip"))
          s2 = new GZipStream(s2, CompressionMode.Decompress);
      else if (resp.ContentEncoding.ToLower().Contains("deflate"))
          s2 = new DeflateStream(s2, CompressionMode.Decompress);

    Thanks.
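
    A sketch of the built-in approach: setting AutomaticDecompression makes HttpWebRequest send the Accept-Encoding header itself and hand back an already-decompressed response stream, so the manual GZipStream/DeflateStream wrapping can be removed. The surrounding properties are carried over from the question:

      HttpWebRequest hwr = (HttpWebRequest)WebRequest.Create(url);
      hwr.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
      hwr.CookieContainer = PersistentCookies.GetCookieContainerForUrl(url);
      hwr.Accept = "text/xml, */*";
      hwr.UserAgent = "My special app";
      hwr.KeepAlive = true;

      using (var resp = (HttpWebResponse)hwr.GetResponse())
      using (Stream s = resp.GetResponseStream())
      {
          // s is already decompressed here
      }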

    Read the article

  • How to get the source code as a registered user

    - by nir143
    Hi. I downloaded the source code of a site, but when I downloaded it I saw that the site identifies my program as a guest. I searched on Google and figured out that I can send a cookie when I "ask" for the source code. That is what I have tried to do, but it still doesn't identify me as a registered user:

      CookieContainer cj = new CookieContainer();
      string all = "";
      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(Url);
      req.CookieContainer = cj;
      HttpWebResponse res = (HttpWebResponse)req.GetResponse();
      CookieCollection cs = cj.GetCookies(req.RequestUri);
      CookieContainer cc = new CookieContainer();
      cc.Add(cs);
      req.CookieContainer = cc;
      StreamReader read = new StreamReader(res.GetResponseStream());
      all = read.ReadToEnd();
      read.Close();
      return all;

    What is wrong here? Thanks very much for the help. (If it helps, I do have the details of a registered user of the site.)
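
    A hedged sketch of the usual approach: the cookie that marks a registered user is issued by the site's login form, so the program has to POST the credentials first and then reuse the same CookieContainer for the page request. The login URL and form field names are placeholders for the real site's values:

      CookieContainer cookies = new CookieContainer();

      // 1) Post the credentials to the login form; the session cookie lands in 'cookies'.
      HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://www.example.com/login.php");
      login.Method = "POST";
      login.ContentType = "application/x-www-form-urlencoded";
      login.CookieContainer = cookies;
      byte[] body = Encoding.ASCII.GetBytes("user=myuser&password=mypassword");
      login.ContentLength = body.Length;
      using (Stream s = login.GetRequestStream())
          s.Write(body, 0, body.Length);
      using (login.GetResponse()) { }

      // 2) Request the page with the same container so the session cookie is sent along.
      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(Url);
      req.CookieContainer = cookies;
      using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
      using (StreamReader reader = new StreamReader(res.GetResponseStream()))
          return reader.ReadToEnd();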

    Read the article

  • Download Current WSJ.com Prime Rate

    - by Registered User
    I need to automatically download the current Wall Street Journal Prime Rate and load the data into my database. What is the best method for downloading this data automatically? I have come up with three possible solutions: (1) scrape an HTML web page from WSJ, (2) parse an RSS news feed from WSJ, or (3) use some API that I haven't found from WSJ. Regarding solution 1: although I don't like it since it could easily break, it's the only one that I have worked out from end to end. It appears I can scrape this page with a WebRequest / WebResponse and read the text in this code:

      <tr>
        <td style="text-align:left" class="colhead">&nbsp;</td>
        <td class="colhead">Latest</td>
        <td class="colhead">Wk ago</td>
        <td class="colhead">High</td>
        <td class="colhead">Low</td>
      </tr>
      <tr>
        <td class="text">U.S.</td>
        <td style="font-weight:bold;" class="num">3.25</td>
        <td class="num">3.25</td>
        <td class="num">3.25</td>
        <td class="num" style="border-right:0px">3.25</td>
      </tr>

    Regarding solution 2: although I can implement an RSS reader solution, I don't see a way to reliably anticipate the verbiage for changes in the Prime Rate. Therefore, I don't think this is as safe or reliable a way to get the data as solution 1. Regarding solution 3: I haven't found any published APIs for checking money rates like the Prime Rate. If anyone knows of a web service or other API for checking money rates, then please let me know.

    Read the article

  • login to website with post method

    - by druffmuff
    I want to log in to a website with C#. Here's the HTML code of the form:

      <form action="http://www.site.com/login.php" method="post" name="login" id="login">
        <table border="0" cellpadding="2" cellspacing="0">
          <tbody>
            <tr><td><b>User:</b></td><td colspan="2"><b>Passwort:</b></td></tr>
            <tr>
              <td><input class="inputbg" name="user" type="text"></td>
              <td><input class="inputbg" name="password" type="password"></td>
              <td><input type="submit" name="user_control" value="Eingabe" class="buttonbg"></td>
            </tr>
          </tbody>
        </table>
      </form>

    I actually tried it like this:

      HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.site.com/login.php");
      request.Method = "POST";
      using (StreamWriter writer = new StreamWriter(request.GetRequestStream(), Encoding.ASCII))
      {
          writer.Write("user=user&password=pass&user_control=Eingabe");
      }
      HttpWebResponse response = (HttpWebResponse)request.GetResponse();
      using (StreamReader reader = new StreamReader(response.GetResponseStream()))
      {
          stream = new StreamWriter("login.html");
          stream.Write(reader.ReadToEnd());
          stream.Close();
      }

    But this is not working. Any ideas why this is failing?
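
    A hedged sketch of the same POST with the pieces most PHP login forms expect: the form Content-Type, a Content-Length, and a CookieContainer so the session cookie returned by login.php can be sent on later requests. Credential values are placeholders:

      CookieContainer session = new CookieContainer();

      HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.site.com/login.php");
      request.Method = "POST";
      request.ContentType = "application/x-www-form-urlencoded";
      request.CookieContainer = session;

      byte[] form = Encoding.ASCII.GetBytes("user=user&password=pass&user_control=Eingabe");
      request.ContentLength = form.Length;
      using (Stream body = request.GetRequestStream())
          body.Write(form, 0, form.Length);

      using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
      using (StreamReader reader = new StreamReader(response.GetResponseStream()))
          File.WriteAllText("login.html", reader.ReadToEnd());

      // Later requests that should appear logged in must reuse the same CookieContainer:
      // nextRequest.CookieContainer = session;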

    Read the article

  • How to Capture a live stream from Windows Media Server 2008 using c#.net

    - by Hummad Hassan
    I want to capture the live stream from Windows Media Server to the filesystem on my PC. I have tried with my own media server using the following code, but when I checked the output file I found this in it. Please help me with this. Thanks.

      [Reference]
      Ref1=http://mywindowsmediaserver/test?MSWMExt=.asf
      Ref2=http://mywindowsmediaserver/test?MSWMExt=.asf

      FileStream fs = null;
      try
      {
          HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
          CookieContainer ci = new CookieContainer(1000);
          req.Timeout = 60000;
          req.Method = "Get";
          req.KeepAlive = true;
          req.MaximumAutomaticRedirections = 99;
          req.UseDefaultCredentials = true;
          req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
          req.ReadWriteTimeout = 90000000;
          req.CookieContainer = ci;
          //req.MediaType = "video/x-ms-asf";
          req.AllowWriteStreamBuffering = true;

          HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
          Stream resps = resp.GetResponseStream();
          fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite);

          byte[] buffer = new byte[1024];
          int bytesRead = 0;
          while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
          {
              fs.Write(buffer, 0, bytesRead);
          }
      }
      catch (Exception ex)
      {
      }
      finally
      {
          if (fs != null)
              fs.Close();
      }

    Read the article

  • Subscribe through API .net C#

    - by Younes
    I have to submit subscription data to another website. I have documentation on how to use this API, however I'm not 100% sure how to set this up. I do have all the information needed, like username / password etc. This is the API documentation: https://www.apiemail.net/api/documentation/?SID=4 What would my request / post / whatever look like in C# .NET (VS 2008) when I'm trying to access this API? This is what I have now; I think I'm not on the right track:

      public static string GArequestResponseHelper(string url, string token, string username, string password)
      {
          HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(url);
          myRequest.Headers.Add("Username: " + username);
          myRequest.Headers.Add("Password: " + password);

          HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse();
          Stream responseBody = myResponse.GetResponseStream();
          Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
          StreamReader readStream = new StreamReader(responseBody, encode);

          //return string itself (easier to work with)
          return readStream.ReadToEnd();

    Hope someone knows how to set this up properly. Thanks!

    Read the article

  • PAYPAL IPN Response Problem

    - by Gorkem Tolan
    I am having a problem with the PayPal IPN response. After payment is made by the customer, PayPal IPN returns this URL: www.mywebsite.com?orderid=32&tx=2AC67201DL3533325&st=Pending&amt=2.50&cc=USD&cm=&item_number=32. There are a couple of issues: 1. The postback field names are undefined or missing, thus I get the INVALID message. I am not sure whether my website is failing to read the POST variables; when I look at the IPN history, it shows that each IPN has been sent with the complete URL. 2. The payment status keeps coming back Pending. Does this cause the first issue? Thank you for your responses in advance. Here is the code:

      Dim strSandbox As String, strLive As String
      Dim req As HttpWebRequest
      strSandbox = "http://www.sandbox.paypal.com/cgi-bin/webscr/"
      strLive = "https://www.paypal.com/cgi-bin/webscr"
      req = CType(WebRequest.Create(strSandbox), HttpWebRequest)

      'Set values for the request back
      req.Method = "POST"
      req.ContentType = "application/x-www-form-urlencoded"
      Dim param() As Byte
      param = Request.BinaryRead(HttpContext.Current.Request.ContentLength)
      Dim strRequest As String
      strRequest = Encoding.ASCII.GetString(param)
      strRequest = strRequest & "&cmd=_notify-validate"
      req.ContentLength = strRequest.Length
      'Response.Write(strRequest)

      'Send the request to PayPal and get the response
      Dim streamOut As StreamWriter
      streamOut = New StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII)
      streamOut.Write(strRequest)
      streamOut.Close()
      Dim streamIn As StreamReader
      streamIn = New StreamReader(req.GetResponse().GetResponseStream())
      Dim strResponse As String
      strResponse = streamIn.ReadToEnd()
      Response.Write(strResponse)
      streamIn.Close()

      If (strResponse = "VERIFIED") Then
          Response.Redirect("thankyou.aspx")
      ElseIf (strResponse = "INVALID") Then
      End If

    Read the article

  • reading non-english html pages with c#

    - by Gal Miller
    I am trying to find a string in Hebrew in a website. The reading code is attached. Afterwards I try to read the file using a StreamReader, but I can't match strings in other languages. What am I supposed to do?

      // used on each read operation
      byte[] buf = new byte[8192];

      // prepare the web page we will be asking for
      HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.webPage.co.il");

      // execute the request
      HttpWebResponse response = (HttpWebResponse)request.GetResponse();

      // we will read data via the response stream
      Stream resStream = response.GetResponseStream();

      string tempString = null;
      int count = 0;
      FileStream fileDump = new FileStream(@"c:\dump.txt", FileMode.Create);

      do
      {
          count = resStream.Read(buf, 0, buf.Length);
          fileDump.Write(buf, 0, buf.Length);
      } while (count > 0); // any more data to read?

      fileDump.Close();
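
    A hedged sketch: decode the response with the encoding the server reports (Hebrew pages are commonly windows-1255 or UTF-8) instead of dumping raw bytes, then search the decoded string. Note also that the original loop writes buf.Length bytes per pass rather than count. The Hebrew search term below is a placeholder:

      // Decode with the server-declared charset, falling back to UTF-8 (assumption).
      Encoding pageEncoding = Encoding.UTF8;
      if (!string.IsNullOrEmpty(response.CharacterSet))
          pageEncoding = Encoding.GetEncoding(response.CharacterSet);

      string page;
      using (StreamReader reader = new StreamReader(response.GetResponseStream(), pageEncoding))
      {
          page = reader.ReadToEnd();
      }

      // Persist with an explicit encoding so the Hebrew characters survive the round trip.
      File.WriteAllText(@"c:\dump.txt", page, Encoding.UTF8);

      // Search the decoded text directly.
      bool found = page.Contains("שלום");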

    Read the article

  • Combining the streams: Web application

    - by Surendra J
    This question deals mainly with streams in a web application in .NET. In my web application I display a list like this: bottle.doc, sheet.xls, presentation.ppt, stackof.jpg, with a checkbox next to each item and a button underneath. Suppose a user selects the four files and clicks the button. I then instantiate classes for each type of file to convert it into PDF; I have already written these, and they convert the files into PDF and return them. My problem is that the classes are able to read the data from a URL and convert it into PDF, but I don't know how to return the streams and merge them.

      string url = @"url";

      //Prepare the web page we will be asking for
      HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
      request.Method = "GET";
      request.ContentType = "application/mspowerpoint";
      request.UserAgent = "Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT+5.0";

      //Execute the request
      HttpWebResponse response = (HttpWebResponse)request.GetResponse();

      //We will read data via the response stream
      Stream resStream = response.GetResponseStream();

      //Write content into the MemoryStream
      BinaryReader resReader = new BinaryReader(resStream);
      MemoryStream PresentaionStream = new MemoryStream(resReader.ReadBytes((int)response.ContentLength));

      //convert the presentation stream into pdf and save it to local disk. But I would like to return the stream again.

    How can I achieve this? Any ideas are welcome.
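
    A hedged sketch of the merging step, assuming the iTextSharp library is acceptable: each converter returns its PDF as a MemoryStream, and PdfCopy concatenates the pages into a single stream that can then be written to the HTTP response. MergePdfStreams is a hypothetical helper name:

      using System.Collections.Generic;
      using System.IO;
      using iTextSharp.text;
      using iTextSharp.text.pdf;

      public static MemoryStream MergePdfStreams(IEnumerable<MemoryStream> pdfStreams)
      {
          MemoryStream output = new MemoryStream();
          Document document = new Document();
          PdfCopy copy = new PdfCopy(document, output);
          copy.CloseStream = false;          // keep the MemoryStream usable after Close()
          document.Open();

          foreach (MemoryStream pdf in pdfStreams)
          {
              PdfReader reader = new PdfReader(pdf.ToArray());
              for (int page = 1; page <= reader.NumberOfPages; page++)
                  copy.AddPage(copy.GetImportedPage(reader, page));
              reader.Close();
          }

          document.Close();
          output.Position = 0;
          return output;   // e.g. write output to Response.OutputStream with content type application/pdf
      }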

    Read the article

  • 401 Unauthorized returned on GET request (https) with correct credentials

    - by Johnny Grass
    I am trying to login to my web app using HttpWebRequest but I keep getting the following error: System.Net.WebException: The remote server returned an error: (401) Unauthorized. Fiddler has the following output: Result Protocol Host URL 200 HTTP CONNECT mysite.com:443 302 HTTPS mysite.com /auth 401 HTTP mysite.com /auth This is what I'm doing: // to ignore SSL certificate errors public bool AcceptAllCertifications(object sender, System.Security.Cryptography.X509Certificates.X509Certificate certification, System.Security.Cryptography.X509Certificates.X509Chain chain, System.Net.Security.SslPolicyErrors sslPolicyErrors) { return true; } try { // request Uri uri = new Uri("https://mysite.com/auth"); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri) as HttpWebRequest; request.Accept = "application/xml"; // authentication string user = "user"; string pwd = "secret"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); request.Headers.Add("Authorization", auth); ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications); // response. HttpWebResponse response = (HttpWebResponse)request.GetResponse(); // Display Stream dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); Console.WriteLine(responseFromServer); // Cleanup reader.Close(); dataStream.Close(); response.Close(); } catch (WebException webEx) { Console.Write(webEx.ToString()); } I am able to log in to the same site with no problem using ASIHTTPRequest in a Mac app like this: NSURL *login_url = [NSURL URLWithString:@"https://mysite.com/auth"]; ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:login_url]; [request setDelegate:self]; [request setUsername:name]; [request setPassword:pwd]; [request setRequestMethod:@"GET"]; [request addRequestHeader:@"Accept" value:@"application/xml"]; [request startAsynchronous];

    Read the article

  • Getting 404 page not found connecting to webDAV

    - by Cragly
    I am trying to connect to a secure WebDAV folder and download a file. I am having problems just trying to get a response from the server, as it keeps giving me a 404 Page Not Found error as soon as I call Request.GetResponse(). I can connect to the WebDAV folder using Windows Explorer by mapping a drive, but cannot seem to do this in code. I have looked at other posts on this site and others online, but most seem to concentrate on connecting to Outlook. Has anybody else had this issue? The code I am using is as follows:

      string URI = "https://transfer.mycompany.com/myDirectory/myFile.csv";
      string username = "username";
      string password = "password";

      Request = (HttpWebRequest)WebRequest.Create(URI);
      Request.Credentials = new NetworkCredential(username, password);
      Request.Method = WebRequestMethods.Http.Get;
      Request.Headers.Add("Translate", "f");

      Response = (HttpWebResponse)Request.GetResponse();
      contentLength = Convert.ToInt64(Response.GetResponseHeader("Content-Length"));

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14  | Next Page >