Search Results

Search found 53597 results on 2144 pages for 'http requests'.

  • Problem requesting an HTTPS URL with TCL

    - by Javier
    Hi Everybody, I'm trying to do the following request using TCL (OpenACS):

        http::register https 443 tls::socket
        set url "https://encrypted.google.com"
        set token [http::geturl $url -timeout 30000]
        set status [http::status $token]
        set answer [http::data $token]
        http::cleanup $token
        http::unregister https

    The problem is that when I read the $status variable I get "eof", and the $answer variable is empty. I tried enabling TLS v1:

        http::register https 443 [list tls::socket -tls1 1]

    and that works for https://www.galileo.edu, but not for https://encrypted.google.com. The site I'm actually trying to reach is https://graph.facebook.com/me/feed?access_token=..., and it doesn't work either. I used curl to retrieve these pages over HTTPS and it works, and I have OpenSSL installed, so I can't see the problem. Is there another way to do HTTPS connections with TCL? I can't tell whether this is a coding problem (maybe I registered the https protocol incorrectly) or a misconfiguration of my server. Hope somebody can help! Thanks!
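
    A likely culprit is SNI (Server Name Indication): hosts like encrypted.google.com and graph.facebook.com sit behind frontends that expect the hostname inside the TLS handshake, and a bare tls::socket doesn't send one. A minimal sketch, assuming a tcltls build new enough to accept -servername (that option is the assumption here):

        package require http
        package require tls

        # Pass the hostname so the handshake carries SNI; -tls1 forces TLS v1
        # as in the attempt above.
        set host graph.facebook.com
        http::register https 443 [list tls::socket -tls1 1 -servername $host]
        set token [http::geturl "https://$host/me/feed?access_token=..." -timeout 30000]
        puts [http::status $token]
        http::cleanup $token
        http::unregister https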

  • PHP Curl and Loop based on a numeric value

    - by danit
    I'm using the Twitter API to collect the number of tweets I've favorited - to be accurate, the total pages of favorited tweets. I use this URL:

        http://api.twitter.com/1/users/show/username.xml

    and grab the XML element 'favorites_count'. For this example let's assume favorites_count = 5. The Twitter API uses this URL to get the favorites (must be authenticated):

        http://twitter.com/favorites.xml

    You can only get the last 20 favorites from this URL, but you can add a 'page' option by appending, for example, ?page=3, e.g. http://twitter.com/favorites.xml?page=2. So what I need to do is use cURL (I think) to collect the favorite tweets from each page URL:

        http://twitter.com/favorites.xml?page=1
        http://twitter.com/favorites.xml?page=2
        http://twitter.com/favorites.xml?page=3
        http://twitter.com/favorites.xml?page=4

    etc. - some kind of loop to visit each URL, collect the tweets, and then output the contents. Can anyone help with this? I need to:

        - use cURL to authenticate
        - collect the number of pages of tweets (already scripted this)
        - loop through each page URL based on the pages value
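
    A minimal sketch of that loop, assuming the old v1 API's basic authentication (the credential variables and $totalPages below are placeholders, not from the question):

        <?php
        $username   = 'username';   // assumed placeholder
        $password   = 'password';   // assumed placeholder
        $totalPages = 5;            // already computed from favorites_count

        for ($page = 1; $page <= $totalPages; $page++) {
            $ch = curl_init("http://twitter.com/favorites.xml?page=" . $page);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);           // return body as a string
            curl_setopt($ch, CURLOPT_USERPWD, "$username:$password"); // basic auth
            $response = curl_exec($ch);
            curl_close($ch);

            $xml = simplexml_load_string($response);
            foreach ($xml->status as $status) {
                echo $status->text . "\n"; // output each favorited tweet
            }
        }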

  • Is this code thread safe?

    - by Shawn Simon
    ''' <summary>
    ''' Returns true if a submission by the same IP address has not been submitted in the past n minutes.
    ''' </summary>
    Protected Function EnforceMinTimeBetweenSubmissions() As Boolean
        Dim minTimeBetweenRequestsMinutes As Integer = 0
        Dim configuredTime = ConfigurationManager.AppSettings("MinTimeBetweenSchedulingRequestsMinutes")
        If String.IsNullOrEmpty(configuredTime) Then Return True
        If (Not Integer.TryParse(configuredTime, minTimeBetweenRequestsMinutes)) _
            OrElse minTimeBetweenRequestsMinutes > 1440 _
            OrElse minTimeBetweenRequestsMinutes < 0 Then
            Throw New ApplicationException("Invalid configuration setting for AppSetting 'MinTimeBetweenSchedulingRequestsMinutes'")
        End If
        If minTimeBetweenRequestsMinutes = 0 Then
            Return True
        End If

        If Cache("submitted-requests") Is Nothing Then
            Cache("submitted-requests") = New Dictionary(Of String, Date)
        End If

        ' Remove old requests.
        Dim submittedRequests As Dictionary(Of String, Date) = CType(Cache("submitted-requests"), Dictionary(Of String, Date))
        Dim itemsToRemove = submittedRequests.Where(Function(s) s.Value < Now).Select(Function(s) s.Key).ToList
        For Each key As String In itemsToRemove
            submittedRequests.Remove(key)
        Next

        If submittedRequests.ContainsKey(Request.UserHostAddress) Then
            ' User has submitted a request in the past n minutes.
            Return False
        Else
            submittedRequests.Add(Request.UserHostAddress, Now.AddMinutes(minTimeBetweenRequestsMinutes))
        End If
        Return True
    End Function
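
    As written, this is not thread safe: the Dictionary stored in Cache is shared by all requests, and concurrent enumeration (the Where over expired items) and mutation (Remove/Add) of a Dictionary can corrupt it or throw. A minimal sketch of one fix, assuming a class-level lock object (the field name is made up):

        ' Shared lock guarding every read/write of the cached dictionary.
        Private Shared ReadOnly SubmittedRequestsLock As New Object()

        ' Inside EnforceMinTimeBetweenSubmissions, wrap all dictionary work:
        SyncLock SubmittedRequestsLock
            ' ... create the dictionary if missing, prune expired entries,
            ' then check/add Request.UserHostAddress here ...
        End SyncLock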

  • How to pull data from a MySQL column and put it into a single array

    - by Rob
    Basically, this is what I currently use in an included file:

        $sites[0]['url'] = "http://example0.com";
        $sites[1]['url'] = "http://example1.com";
        $sites[2]['url'] = "http://example2.com";
        $sites[3]['url'] = "http://example3.com";
        $sites[4]['url'] = "http://example4.com";
        $sites[5]['url'] = "http://example5.com";

    And I output it like so:

        foreach($sites as $s)

    But I want to make it easier to manage via a MySQL database. So my question is: how can I build the $sites array automatically from my MySQL table and output it the same way?
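
    A minimal sketch, assuming a table named sites with a url column (both names are assumptions) and the mysql_* API that was current at the time (mysqli/PDO being the modern replacements):

        <?php
        // Assumed schema: sites(url).
        $result = mysql_query("SELECT url FROM sites");

        $sites = array();
        while ($row = mysql_fetch_assoc($result)) {
            $sites[] = array('url' => $row['url']); // same shape as the hand-written array
        }

        foreach ($sites as $s) {
            echo $s['url'] . "\n";
        }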

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign the URLs to groups, based on host address:

        {
          'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
          'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
        }

    My current basic solution takes about 600 MB of RAM to do this (the file itself is about 300 MB). Could you suggest some more efficient approaches? My current solution simply reads line by line, extracts the host address by regex, and pushes the URL into a hash. EDIT: Here is my implementation (I've cut out the irrelevant parts):

        while ($line = <STDIN>) {
            chomp($line);
            $line =~ /(http:\/\/.+?)(\/|$)/i;
            $host = "$1";
            push @{$urls{$host}}, $line;
        }
        store \%urls, 'out.hash';
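
    One way to shrink the footprint is to stop storing the host prefix in every element: keep only the path inside each group. A sketch under that assumption (Storable's store kept from the original):

        use strict;
        use warnings;
        use Storable qw(store);

        my %urls;
        while (my $line = <STDIN>) {
            chomp $line;
            # Capture host and path separately; skip non-http lines.
            next unless $line =~ m{^(http://[^/]+)(/.*)?$}i;
            push @{ $urls{$1} }, defined $2 ? $2 : '/';  # store the path only
        }
        store \%urls, 'out.hash';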

  • .htaccess to redirect a domain's sub-directory URL to the same URL on a sub-domain

    - by Ibrahm Yaser
    I have a problem redirecting URLs in a sub-directory of a domain, like http://mydomain.com/project/01/117q803789s92d01 or http://mydomain.com/project/08/117t803789s92d08, to the same path on a sub-domain, i.e. http://ww2.mydomain.com/project/01/117q803789s92d01 and http://ww2.mydomain.com/project/08/117t803789s92d08 respectively. I tried this:

        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.mydomain\.com$
        RewriteRule ^project\/\/?(.*)$ "http\:\/\/ww2\.mydomain\.com\/project\/$1" [R=301,L]

    But for some reason it always redirects me to the wrong URL, with a missing "/": if I access http://mydomain.com/project/01/117q803789s92d01, it redirects me to http://ww2.mydomain.com/project01/117q803789s92d01. Did I miss something?
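
    A simpler form of the rule, as a sketch (assuming the rules live in the site root's .htaccess): the substitution string needs no backslash escaping, and making the slash after "project" a literal part of the pattern keeps it out of the capture, which is the likely source of the dropped "/":

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteRule ^project/(.*)$ http://ww2.mydomain.com/project/$1 [R=301,L]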

  • QHttp request and response debugging

    - by William Wilson
    OS: Windows XP/Vista
    Qt version: 4.6.1, using OpenSSL

    I need to watch the actual requests and responses going over the wire for QHttp requests and responses, and in some cases I need to interrupt a request. I tried a few of the HTTP debuggers available on the market, but they seem to work only for requests made through the WinInet functions. Unfortunately, the openssldump utility is not available on Windows platforms. Thank you.
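
    One in-process alternative, sketched below: QHttp emits signals around each request and response, so a small logger QObject can observe traffic and, via abort(), interrupt a request, with no external proxy needed. The logger object and its slots are assumptions:

        // Sketch: wire QHttp's signals to a hypothetical logger QObject.
        QHttp *http = new QHttp("example.com", QHttp::ConnectionModeHttps, 443);
        QObject::connect(http, SIGNAL(requestStarted(int)),
                         logger, SLOT(onRequestStarted(int)));
        QObject::connect(http, SIGNAL(responseHeaderReceived(QHttpResponseHeader)),
                         logger, SLOT(onResponseHeader(QHttpResponseHeader)));
        QObject::connect(http, SIGNAL(done(bool)),
                         logger, SLOT(onDone(bool)));
        // To interrupt an in-flight request: http->abort();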

  • Parsing Firefox bookmarks using regular expressions

    - by SIFE
    I tried to parse my Firefox bookmarks (the JSON exported version) with these attempts:

        cat boo.json | grep '\"uri\"\:\"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}\"'
        cat boo.json | grep '"uri"\:"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}'
        cat boo.json | grep '"uri"\:"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}"'

    and a few others, but they all fail. The bookmarks JSON file looks like this:

        .........."uri":"http://www.google.com/?"......"uri":"http://stackoverflow.com/"

    So the output should look like this:

        "uri":"http://www.google.com/?"
        "uri":"http://stackoverflow.com/"

    What is the missing part in my regular expression?
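
    Two problems stand out: ^ anchors at the start of the line, so it can never match right after "uri":", and plain grep uses basic regular expressions, where + and {2,3} are not special. A sketch, assuming GNU grep (for -o, which prints each match on its own line):

        grep -Eo '"uri":"http://[a-zA-Z0-9.-]+\.[a-zA-Z]{2,3}[^"]*"' boo.json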

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day.

    I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to [email protected]. When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP.

    Here's what I've learned since then: my client's mail server is Microsoft Exchange on a box physically in their office. Whenever someone in the outside world emails [email protected], the mail arrives. But if the web server sends to the same email address, it fails every time.

    This is not a PHP problem. I secure shell in to the web server and have tested this both with sendmail and the UNIX mail application. I've also tested it by emailing various email accounts from the shell. I can email myself, for example; just nobody at the website.com domain. In short, when I'm logged in to website.com, mail to [email protected], [email protected], and [email protected] all fail. All other addresses work fine. What I've discovered is that those dropped emails are routed to the web server's "catchall" account, where they sit in its inbox.

    I've done an MX lookup on website.com. The MX record points to mailsec.website.com. I can telnet to mailsec.website.com port 25 and see the SMTP server. It appears to me that website.com isn't doing an MX lookup when it's sending mail to [email protected]. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver to, and drops the mail into the catchall account. What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server.

    I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base and have misdiagnosed this entirely. Internet mail and MX have always seemed a black art to me, and my ignorance is certainly showing in this question.
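
    The diagnosis at the end is very likely right. A sketch of the usual fix, assuming a stock sendmail layout on the web server: remove the domain from the file that marks it as locally delivered, so sendmail falls back to the MX lookup.

        # /etc/mail/local-host-names -- sendmail's list of "deliver locally" domains.
        # Remove (or comment out) the line naming the client's domain:
        #   website.com
        # then restart sendmail so it rereads the file:
        /etc/init.d/sendmail restart

    On shared hosting you may not have access to this file; the equivalent control-panel setting is usually called something like "remote mail exchanger".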

  • Mysterious constraints problem with SQL Server 2000

    - by Ramon
    Hi all. I'm getting the following error from a VB.NET web application written in VS 2003, on framework 1.1. The web app runs on Windows Server 2000 with IIS 5, and reads from a SQL Server 2000 database on the same machine.

        System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
           at System.Data.DataSet.FailedEnableConstraints()
           at System.Data.DataSet.EnableConstraints()
           at System.Data.DataSet.set_EnforceConstraints(Boolean value)
           at System.Data.DataTable.EndLoadData()
           at System.Data.Common.DbDataAdapter.FillFromReader(Object data, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
           at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)

    The problem appears when the web app is under high load. The system runs fine when volume is low, but when the number of requests becomes high, the system starts rejecting incoming requests with the above exception. Once the problem appears, very few requests actually make it through and get processed normally - about 2 in every 30. The vast majority of requests fail until a SQL Server restart or IIS reset is performed. The system then starts processing requests normally, and after some time it starts throwing the same error.

    The error occurs when a data adapter runs the Fill() method against a SELECT statement to populate a strongly-typed dataset. It appears that the dataset does not like the data it is given and throws this exception. The error occurs on various SELECT statements acting on different tables. I have regenerated the dataset and checked the relevant constraints, as well as the tables from which the data is read. Both the dataset definition and the data in the tables are fine.

    Admittedly, the hardware running both the web app and SQL Server 2000 is seriously outdated considering the number of incoming requests it currently receives. The amount of RAM consumed by SQL Server is dynamically allocated, and at peak times SQL Server can consume up to 2.8 GB out of the 3.5 GB on the server. At first I suspected some sort of index or database corruption, but DBCC CHECKDB found no errors in the database.

    So now I'm wondering whether this error is a result of the hardware limitations of the system. Is it possible for SQL Server to somehow mangle the data it passes to the dataset, resulting in constraint violations due to, say, a data type/length mismatch? I tried accessing the RowError messages of the rows in the retrieved dataset tables, but I kept getting empty strings, even though HasErrors = True for the datatables in question. I have not set EnforceConstraints = False, and I don't want to do that. Thanks in advance. Ray
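
    On the RowError detail: the per-row message is only populated on the rows returned by GetErrors(), which may explain the empty strings. A diagnostic sketch (the ds and adapter variable names are assumptions):

        ' Temporarily load without constraint checking, then dump the real failures.
        ds.EnforceConstraints = False
        adapter.Fill(ds)
        For Each table As DataTable In ds.Tables
            If table.HasErrors Then
                For Each row As DataRow In table.GetErrors()
                    Console.WriteLine(row.RowError)
                Next
            End If
        Next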

  • How does OpenStack Swift handle concurrent RESTful API requests?

    - by Chen Xie
    I installed a Swift service and wanted to know how it copes with concurrent requests. So I created a massive number of threads in Java and sent requests via the RESTful API. Not surprisingly, as the number of requests climbed, the program started to throw exceptions:

        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.DualStackPlainSocketImpl.connect0(Native Method)
            at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
            at java.net.Socket.connect(Socket.java:579)
            at java.net.Socket.connect(Socket.java:528)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)

    But can anyone tell me how that timeout happened? I am curious how Swift handles those requests. Does it queue them, and when there are too many requests in the queue, do the ones that wait too long just get kicked out? If that holds, does it mean Swift handles requests with an asynchronous mechanism? Thanks.
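
    On the client side, that timeout is consistent with the operating system or the Swift proxy refusing new connections once too many sockets are open at once, rather than with anything queue-related inside Swift itself. A sketch of a bounded load generator (the pool size and the request method are assumptions):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class LoadTest {
            public static void main(String[] args) {
                int totalRequests = 10000;
                // Cap concurrency instead of spawning one thread per request.
                ExecutorService pool = Executors.newFixedThreadPool(50); // assumed cap, tune it
                for (int i = 0; i < totalRequests; i++) {
                    pool.submit(new Runnable() {
                        public void run() {
                            sendSwiftRequest(); // hypothetical: the existing request code
                        }
                    });
                }
                pool.shutdown();
            }

            static void sendSwiftRequest() { /* existing RESTful call goes here */ }
        }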

  • golang dynamically parsing files

    - by Brian Voelker
    For parsing files I have set up a variable with template.ParseFiles, and I currently have to set each file manually. Two things:

    1. How can I walk through a main folder and a multitude of subfolders and automatically add the files to ParseFiles, so I don't have to add each file individually?
    2. How can I call a file with the same name in a subfolder? Currently I get an error at runtime if I add a file with the same name to ParseFiles.

        var templates = template.Must(template.ParseFiles(
            "index.html",           // main file
            "subfolder/index.html", // subfolder with the same filename errors at runtime
            "includes/header.html",
            "includes/footer.html",
        ))

        func main() {
            // Walk and ParseFiles
            filepath.Walk("files", func(path string, info os.FileInfo, err error) error {
                if !info.IsDir() {
                    // Add path to ParseFiles
                }
                return nil
            })
            http.HandleFunc("/", home)
            http.ListenAndServe(":8080", nil)
        }

        func home(w http.ResponseWriter, r *http.Request) {
            render(w, "index.html")
        }

        func render(w http.ResponseWriter, tmpl string) {
            err := templates.ExecuteTemplate(w, tmpl, nil)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
            }
        }
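
    ParseFiles names every template after its base name, which is why two index.html files collide. A sketch of one workaround, keying each template by its path relative to the root instead:

        package main

        import (
            "io/ioutil"
            "os"
            "path/filepath"
            "text/template" // or html/template, to match the original
        )

        var templates = template.New("root")

        // loadTemplates parses every file under root, naming each template by its
        // relative path, so "index.html" and "subfolder/index.html" stay distinct.
        func loadTemplates(root string) error {
            return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
                if err != nil || info.IsDir() {
                    return err
                }
                b, err := ioutil.ReadFile(path)
                if err != nil {
                    return err
                }
                rel, err := filepath.Rel(root, path)
                if err != nil {
                    return err
                }
                _, err = templates.New(filepath.ToSlash(rel)).Parse(string(b))
                return err
            })
        }

    render can then execute "index.html" and "subfolder/index.html" as distinct names.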

  • How do I get client ip address using TcpClient?

    - by brendan
    I am using TcpClient to listen on a port for requests. When a request comes in from a client, I want to know the client IP making the request. I've tried:

        Console.WriteLine(tcpClient.Client.RemoteEndPoint.ToString());
        Console.WriteLine(tcpClient.Client.LocalEndPoint.ToString());

        var networkStream = tcpClient.GetStream();
        var pi = networkStream.GetType().GetProperty("Socket", BindingFlags.NonPublic | BindingFlags.Instance);
        var socketIp = ((Socket)pi.GetValue(networkStream, null)).RemoteEndPoint.ToString();
        Console.WriteLine(socketIp);

    All of these output 10.x.x.x addresses, which are private addresses and clearly not the addresses of the clients off my network making the requests. What can I do to get the public IP of the clients making the requests?
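
    RemoteEndPoint is already the right call: it reports the actual TCP peer. A 10.x.x.x result means the peer genuinely is a private-network hop (a NAT device or load balancer terminating the connection), and the original client address only survives if that hop forwards it in-band, e.g. HAProxy's PROXY protocol or an X-Forwarded-For header for HTTP traffic. A short sketch of the socket side:

        // The peer of this socket. Behind a load balancer this is the balancer,
        // not the end user; the user's public IP must then be forwarded by the
        // balancer itself (PROXY protocol, X-Forwarded-For, etc.).
        var remote = (IPEndPoint)tcpClient.Client.RemoteEndPoint;
        Console.WriteLine(remote.Address);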

  • How to treat race conditions on session state in a web application?

    - by Morgan Cheng
    I work on an ASP.NET application with heavy AJAX traffic. Once a user logs in to our web application, a session is created to store that user's state. Currently, our solution for keeping session data consistent is simple and brutal: each request must acquire an exclusive lock before being processed. This works fine for a traditional web application, but when the application turns to AJAX it becomes inefficient. It is quite possible for multiple AJAX requests to be sent to the server at the same time without reloading the page. If all AJAX requests are serialized by the exclusive lock, the responses are slow; worse, many AJAX requests that don't touch the same session variables get blocked anyway. But if we drop the per-request exclusive lock, we have to treat every race condition carefully to avoid deadlock, and I'm afraid that would make the code complex and buggy. So, is there a best practice for keeping session data consistent while keeping the code simple and clean?
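
    One standard middle ground in ASP.NET, sketched below: let requests that only read session state declare that, so they take a shared read lock instead of the exclusive writer lock and no longer serialize behind each other. Only the pages that actually write session data keep the exclusive lock.

        <%@ Page EnableSessionState="ReadOnly" %>

        <%-- For handlers, implement IReadOnlySessionState instead of
             IRequiresSessionState to get the same effect. --%>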

  • nodejs response speed and nginx

    - by user1502440
    I've just started testing nodejs and wanted some help understanding the following behavior.

    Example #1:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.end('foo');
        }).listen(1001, '0.0.0.0');

    Example #2:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.write('foo');
            res.end('bar');
        }).listen(1001, '0.0.0.0');

    When testing response time in Chrome:

        example #1 - 6-10ms
        example #2 - 200-220ms

    But if I test both examples through an nginx proxy_pass:

        server {
            listen 1011;
            location / {
                proxy_pass http://127.0.0.1:1001;
            }
        }

    I get this:

        example #1 - 4-8ms
        example #2 - 4-8ms

    I am not an expert on either nodejs or nginx; can someone explain this?

    nodejs - v0.8.1
    nginx - v1.2.2
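
    The ~200ms figure is the classic signature of TCP delayed ACKs interacting with Nagle's algorithm when the body leaves in two packets (write, then end); nginx masks it by buffering the upstream response into a single packet. A sketch to test that theory, assuming it applies here:

        var http = require('http');

        http.createServer(function (req, res) {
            // Disable Nagle so the second packet isn't held back waiting for an ACK.
            req.socket.setNoDelay(true);
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.write('foo');
            res.end('bar');
        }).listen(1001, '0.0.0.0');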

  • jQuery and regex for adding icons to specific links?!

    - by rayne
    I'm using jQuery to add icons to specific links, e.g. a MySpace icon for links starting with http://myspace.com, etc. However, I can't figure out how to use regular expressions (if that's even possible here) to make jQuery recognize the link both with and without "www." (I'm very bad at regular expressions in general). Here are two examples:

        $("a[href^='http://www.last.fm']").addClass("lastfm").attr("target", "_blank");
        $("a[href^='http://livejournal.com']").addClass("livejournal").attr("target", "_blank");

    They work fine, but now I want the last.fm rule to match http://last.fm, http://www.last.fm and http://www.lastfm.de; currently it only matches www.last.fm. I'd also like the livejournal rule to match subdomain links like http://username.livejournal.com. How can I do that? Thanks in advance!
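
    One approach, sketched under the assumption that matching the parsed hostname is acceptable: anchor elements expose the browser-parsed host as this.hostname, which sidesteps prefix matching (and the www. problem) entirely:

        $("a[href^='http']").each(function () {
            var host = this.hostname; // e.g. "username.livejournal.com"
            if (/(^|\.)last\.fm$/.test(host) || /(^|\.)lastfm\.de$/.test(host)) {
                $(this).addClass("lastfm").attr("target", "_blank");
            } else if (/(^|\.)livejournal\.com$/.test(host)) {
                $(this).addClass("livejournal").attr("target", "_blank");
            }
        });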

  • Servlet doesn't appear to execute in a threaded manner

    - by RenegadeAndy
    I have developed a simple server using Tomcat which runs a servlet. The servlet calls a command-line program, which takes about 20 seconds to execute, then returns the result to the user via JSON. The problem is, if I make more than 2 simultaneous requests, the servlet blocks until one of the previous requests is completed. An example can be seen below: "Im in" is printed at the top of the servlet, and the list of results after the servlet has executed. All requests were made at the same time, but you can clearly see they are not dealt with simultaneously. What setting do I need to change in Tomcat to have all requests handled at the same time? Thanks, Andy

        Im in
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
        Im in
        FVFNT01 STOP_IDLE FVFNT03 STOP_IDLE FVFNT16 STOP_IDLE FVFNT17 STOP_IDLE
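
    Tomcat's HTTP connector already serves requests from a pool of worker threads (200 by default), so serialization like this usually comes from elsewhere: a synchronized method in the servlet, a shared object guarding the command-line call, or the browser serializing identical requests to the same URL. For completeness, the Tomcat knob is maxThreads on the connector in conf/server.xml, sketched here with its default:

        <!-- maxThreads caps how many requests are processed simultaneously. -->
        <Connector port="8080" protocol="HTTP/1.1"
                   maxThreads="200"
                   connectionTimeout="20000" />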

  • Sort a multidimensional array in PHP by two fields

    - by Ankita Agrawal
    I want to sort an array by two fields. I mean to say I have an array like this (print_r output, one entry per line):

        [0]  name => abc,   url => http://127.0.0.1/abc/img1.png,    count => 69,  img => accessoire-sets_1.jpg
        [1]  name => abc2,  url => http://127.0.0.1/abc/img12.png,   count => 73,  img =>
        [2]  name => abc45, url => http://127.0.0.1/abc/img122.png,  count => 15,  img => tomahawk-kopen_1.png
        [3]  name => zyz,   url => http://127.0.0.1/abc/img22.png,   count => 168, img =>
        [4]  name => lmn,   url => http://127.0.0.1/abc/img1222.png, count => 10,  img =>
        [5]  name => qqq,   url => http://127.0.0.1/abc/img1222.png, count => 70,  img =>
        [6]  name => dsa,   url => http://127.0.0.1/abc/img1112.png, count => 43,  img =>
        [7]  name => wer,   url => http://127.0.0.1/abc/img172.png,  count => 228, img =>
        [8]  name => hhh,   url => http://127.0.0.1/abc/img126.png,  count => 36,  img =>
        [9]  name => rrrt,  url => http://127.0.0.1/abc/img12.png,   count => 51,  img =>
        [10] name => yyy,   url => http://127.0.0.1/abc/img12.png,   count => 22,  img =>
        [11] name => cxz,   url => http://127.0.0.1/abc/img12.png,   count => 41,  img =>
        [12] name => tre,   url => http://127.0.0.1/abc/img12.png,   count => 32,  img =>
        [13] name => fds,   url => http://127.0.0.1/abc/img12.png,   count => 10,  img =>

    Entries WITHOUT images (an empty "img" field) should always be placed underneath entries WITH images; within that, entries should be sorted on the amount of products (the "count" field). So I have to sort the array first on img, then on count. I am using:

        usort($childLinkCats, 'sortempty');
        function sortempty($a, $b) {
            return empty($a['img']);
        }

    which puts entries with an image value above the ones containing an empty value. And to sort on count I'm using:

        usort($childLinkCats, "_sortByCount");
        function _sortByCount($a, $b) {
            return strnatcmp($a['count'], $b['count']);
        }

    which sorts by count. But only one of them works at a time, and I have to use both. Please help me.
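
    Two consecutive usort calls cannot work: the second sort throws away the ordering produced by the first. Both rules belong in a single comparator. A sketch, assuming PHP 5.3+ for the anonymous function and descending count (swap $a and $b in the last line for ascending):

        usort($childLinkCats, function ($a, $b) {
            $aHasImg = empty($a['img']) ? 0 : 1;
            $bHasImg = empty($b['img']) ? 0 : 1;
            if ($aHasImg !== $bHasImg) {
                return $bHasImg - $aHasImg;   // rows with an image sort first
            }
            return $b['count'] - $a['count']; // then by count, descending
        });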

  • Regex to match a pattern with subdomains in Java gives issues

    - by Ramesh
    I am trying to match the subdomain of a URL using http://([a-z0-9]*.)?example.com/.*, which works for these cases:

        http://example.com/index.html
        http://test.example.com/index.html
        http://test1.example.com/index.html
        http://www.example.com/122/index.html

    But the problem is that it also matches this URL:

        http://www.test.com/?q=http://example.com/index.html

    If a URL on another domain contains the URL in its path, it matches too. Can anyone tell me how to match the current domain only? Getting the host would work, but I need to match the full URL.
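
    A sketch of one fix: anchor the pattern and escape the dots, so example.com can only appear in the host position and a URL buried in the query string no longer matches:

        import java.util.regex.Pattern;

        public class HostMatch {
            public static void main(String[] args) {
                // Optional subdomains, then example.com as the host, then the path.
                Pattern p = Pattern.compile("^http://([a-z0-9-]+\\.)*example\\.com(/.*)?$");
                System.out.println(p.matcher("http://test.example.com/index.html").matches());          // true
                System.out.println(p.matcher("http://www.test.com/?q=http://example.com/x").matches()); // false
            }
        }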

  • ASP.NET web site running in IIS and hosting WCF service fails to get connections on the TCP server

    - by Salil
    I am using a Silverlight client application in combination with an ASP.NET web site running in IIS and hosting a WCF service. This WCF service uses a library that starts a TCP server and initiates requests to the connected TCP clients whenever the Silverlight client application makes async WCF requests. When I use this library in a local WPF application, the TCP server is able to receive client connection requests and I can get info from these clients. But when I use the same library from the WCF service implementation inside the ASP.NET web site project (plus the Silverlight client), the server strangely does not receive any connection requests: when I create the TcpListener object and call Start, nothing happens (nor is an exception thrown). My setup uses Ethernet for the Internet connection and Wi-Fi for the TCP clients. Is the WCF service getting confused because of this? Are there any special WCF settings I should add for TcpListener.Start to work?
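
    TcpListener.Start returning quietly only means the socket was bound; it says nothing about whether the Wi-Fi clients can reach it from inside an IIS worker process (firewall rules and the app-pool identity are the usual suspects). A diagnostic sketch, with placeholder address and port: bind explicitly to the Wi-Fi adapter's address and log what was actually bound.

        // Placeholders - substitute the Wi-Fi adapter's real IP and the real port.
        var listener = new TcpListener(IPAddress.Parse("192.168.0.10"), 9000);
        listener.Start();
        // Inside IIS, write this to a log file rather than the console.
        Console.WriteLine("Bound to " + listener.LocalEndpoint);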

  • Does ASP.NET Make Request Scheduling Decisions Based Upon SessionID?

    - by Mike Murphy
    I know that a properly implemented SessionStateStoreProvider maintains an exclusive lock on session data for the duration of a request. However, when multiple requests for the same session arrive simultaneously (e.g. via IFRAMEs), only one of them can make forward progress; all the other requests block for a while and reduce the number of worker threads available during that time. It seems that if ASP.NET "peeked" at the session IDs on incoming requests early on, it could avoid running requests from the same session simultaneously. This would improve throughput under load for pages that don't want to give up using IFRAMEs. It seems plausible enough that it might be true.

  • Redirecting an image request to a PHP page

    - by Searock
    Hi. Is it possible to redirect a request for an image to a PHP page, and then have that page respond with a different image? For example, if a user or another website requests an image, the request should be routed to the PHP page, which then serves a different image. So if another website requests http://example.com/images/a.gif, that website would get a different image, i.e. http://example.com/images/b.gif. Is that possible? Let me know if I am not clear with my problem. Thanks.

    Edit: I am trying to create an avatar changer for a forum, but the problem is that I cannot use a PHP link as my avatar. So I thought that if I could use an image link, then when the forum requests the image I could redirect it internally to a PHP page, and from the PHP page serve a different image.
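
    A sketch of one way to do this with Apache, assuming mod_rewrite is available: a rule such as RewriteRule ^images/a\.gif$ /avatar.php [L] silently routes the image request to a script (avatar.php is a made-up name), which then emits whichever image applies:

        <?php
        // avatar.php - hypothetical target of the rewrite rule above.
        // Decide which file to serve here (by referer, user, time, etc.).
        $file = 'images/b.gif';

        header('Content-Type: image/gif');
        header('Content-Length: ' . filesize($file));
        readfile($file);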

  • cURL/PHP Request Executes 50% of the Time

    - by makavelli
    After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host succeed only about 50% of the time in my case. Here's the situation: I have a sequence of cURL requests, all issued to an HTTPS remote host within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but most of the time I get the following error from cURL:

        * About to connect() to www.virginia.edu port 443 (#0)
        *   Trying 128.143.22.36... * connected
        * Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
        * Closing connection #0

    If I try again a few times I get the same result, but after a few tries the requests go through successfully. Running the script again after that results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally. I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of openssl. In terms of cURL-specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating the problem, the exact same thing happens on my Mac OS X dev machine - the requests only go through ~50% of the time.
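
    One workaround often suggested for intermittent "bad record mac" failures is to pin the SSL/TLS version instead of letting OpenSSL negotiate it per connection; whether it helps with this particular host is an assumption to test:

        <?php
        // Force a single protocol version on the existing handle ($ch assumed).
        // libcurl's numbering: 1 = TLSv1, 2 = SSLv2, 3 = SSLv3.
        curl_setopt($ch, CURLOPT_SSLVERSION, 1);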

  • Extract and replace named group regex

    - by user557670
    I was able to extract the href values of anchors in an HTML string. Now what I want to achieve is to extract each href value and replace it with a new GUID. I need to return both the modified HTML string and a list of the extracted href values with their corresponding GUIDs. Thanks in advance. My existing code looks like this:

        Dim sPattern As String = "<a[^>]*href\s*=\s*((\""(?<URL>[^\""]*)\"")|(\'(?<URL>[^\']*)\')|(?<URL>[^\s]* ))"
        Dim matches As MatchCollection = Regex.Matches(html, sPattern, RegexOptions.IgnoreCase Or RegexOptions.IgnorePatternWhitespace)
        If Not IsNothing(matches) AndAlso matches.Count > 0 Then
            Dim urls As List(Of String) = New List(Of String)
            For Each m As Match In matches
                urls.Add(m.Groups("URL").Value)
            Next
        End If

    Sample HTML string:

        <html><body><a title="http://www.google.com" href="http://www.google.com">http://www.google.com</a><br /><a href="http://www.yahoo.com">http://www.yahoo.com</a><br /><a title="http://www.apple.com" href="http://www.apple.com">Apple</a></body></html>
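
    Regex.Replace with a MatchEvaluator does the extraction and the substitution in one pass. A sketch reusing sPattern, assuming VB 2010+ for the multi-line lambda (on older compilers, pass a named function as the MatchEvaluator); the dictionary name and one-GUID-per-match policy are assumptions:

        ' Maps each generated GUID back to the href value it replaced.
        Dim urlsByGuid As New Dictionary(Of String, String)

        Dim replacedHtml As String = Regex.Replace(html, sPattern,
            Function(m As Match)
                Dim newGuid As String = Guid.NewGuid().ToString()
                urlsByGuid(newGuid) = m.Groups("URL").Value
                ' Swap only the URL portion inside the matched anchor.
                Return m.Value.Replace(m.Groups("URL").Value, newGuid)
            End Function,
            RegexOptions.IgnoreCase Or RegexOptions.IgnorePatternWhitespace)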

  • GET REST call is not returning the string I put in the URL

    - by wizage
    I have very simple code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        namespace Calculator.Controllers
        {
            public class CalcController : ApiController
            {
                public string Get(string type)
                {
                    return type;
                }
            }
        }

    And this is what it returns when I type in http://www.example.com/api/calc/test:

        <string xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/2003/10/Serialization/" i:nil="true"/>

    When I use http://www.example.com/api/calc/?test=test it returns this:

        <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">type</string>

    How do I make it so I can just use the top form instead of the bottom one?
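
    The default Web API route template is api/{controller}/{id}, so the last path segment binds only to a parameter named id; a parameter named type gets nothing, which is why the path form comes back nil. A sketch of the two usual fixes (the route name below is made up):

        // Option 1: rename the action parameter to match the default route.
        public string Get(string id)
        {
            return id;
        }

        // Option 2: keep Get(string type) and register a matching route in
        // WebApiConfig.Register, alongside the default one:
        config.Routes.MapHttpRoute(
            name: "CalcWithType",
            routeTemplate: "api/{controller}/{type}"
        );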
