Search Results

Search found 20409 results on 817 pages for 'url routing'.


  • PHP robots.txt parsing

    - by omfgroflmao
    Is there an easier way to do this?

        function parse_robots_txt($URL){
            $parsed = parse_url($URL);
            $robots = file_get_contents('http://'.$parsed['host'].'/robots.txt',FILE_TEXT);
            $exploded = explode('user-agent:',strtolower($robots));
            foreach($exploded as $user_agent){
                $user_agent = trim($user_agent);
                if(substr($user_agent,0,1) == '*'){
                    $user_agent = str_replace('#','',preg_replace('/#.*\\n/i','',$user_agent));
                    $user_agent = str_replace('disallow:','',substr($user_agent,1));
                    $user_agent = preg_replace('/allow:/i', '+-+-+-+', $user_agent, 1);
                    $user_agent = str_replace('allow:','',$user_agent);
                    print_r(explode('+-+-+-+',$user_agent));
                }
            }
        }
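
    One line-oriented alternative (a sketch only; the function name, comment handling and return shape are illustrative, not taken from the question) walks the file once and keeps the Allow/Disallow rules for the wildcard user-agent:

        function parse_robots_for_all_agents($url) {
            $host = parse_url($url, PHP_URL_HOST);
            $robots = @file_get_contents('http://' . $host . '/robots.txt');
            if ($robots === false) {
                return array();
            }
            $rules = array();
            $applies = false;
            foreach (preg_split('/\r\n|\r|\n/', $robots) as $line) {
                $line = trim(preg_replace('/#.*$/', '', $line)); // strip comments first
                if ($line === '') continue;
                if (preg_match('/^user-agent:\s*(.+)$/i', $line, $m)) {
                    $applies = (trim($m[1]) === '*');           // only keep the * section
                } elseif ($applies && preg_match('/^(allow|disallow):\s*(.*)$/i', $line, $m)) {
                    $rules[strtolower($m[1])][] = trim($m[2]);  // group paths by rule type
                }
            }
            return $rules;
        }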

    Read the article

  • Kohana3: Absolute path to a file

    - by Svish
    Say I have a file in my Kohana 3 website called assets/somefile.jpg. I can get the URL to that file by doing

        echo Url::site('assets/somefile.jpg'); // /kohana/assets/somefile.jpg

    Is there a way I can get the absolute path to that file? Like if I want to fopen it or get the size of the file or something like that. In other words, I would like to get something like /var/www/kohana/assets/somefile.jpg or W:\www\kohana\assets\somefile.jpg or whatever the absolute path is.
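
    If DOCROOT points at the directory that holds the assets folder (Kohana 3's front controller defines DOCROOT in index.php, so where assets/ actually lives is an assumption here), building the filesystem path is a plain concatenation:

        $relative = 'assets/somefile.jpg';
        $absolute = DOCROOT.$relative;      // e.g. /var/www/kohana/assets/somefile.jpg

        if (is_file($absolute))
        {
            $size   = filesize($absolute);  // filesystem path, so filesize/fopen work
            $handle = fopen($absolute, 'rb');
            // ... read or inspect the file ...
            fclose($handle);
        }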

    Read the article

  • Qooxdoo REST JSON request problem - unexpected token and then timeout

    - by freiksenet
    Hello! I am learning the Qooxdoo framework and I am trying to make it work with a small Django web service. The Django web service just returns JSON data like this:

        { "name": "Football", "description": "The most popular sport." }

    Then I use the following code to query that URL:

        var req = new qx.io.remote.Request(url, "GET", "application/json");
        req.toggleCrossDomain();
        req.addListener("completed", function(e) {
            alert(e.getContent());
        });
        req.send();

    Unfortunately, when I execute the code I get an unexpected token error and then the request times out:

        Uncaught SyntaxError: Unexpected token :                   Native.js:91013011
        qx.io.remote.RequestQueue[246]: Timeout: transport 248     Native.js:91013011
        qx.io.remote.RequestQueue[246]: 5036ms > 5000ms            Native.js:91013013
        qx.io.remote.Exchange[248]: Timeout: implementation 249

    JSLint reports that this is valid JSON, so I wonder why Qooxdoo doesn't parse it correctly.
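
    Toggling cross-domain is the likely culprit rather than the JSON itself (a sketch of the reasoning, not a verified diagnosis): with crossDomain enabled, qx.io.remote.Request falls back to a script-tag transport, so the browser evaluates the response as JavaScript, and a bare object literal starting with { "name": ... is a syntax error at the first colon. Keeping the request same-origin (for example by proxying the Django service under the app's own host) lets the normal XHR transport parse it as JSON:

        var req = new qx.io.remote.Request(url, "GET", "application/json");
        // no toggleCrossDomain(): same-origin requests use the XHR transport,
        // which parses the application/json body into a plain object
        req.addListener("completed", function(e) {
            var data = e.getContent();
            alert(data.name + ": " + data.description);
        });
        req.send();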

    Read the article

  • Web scraping: how to get scraper implementation from text link?

    - by isme
    I'm building a Java web media-scraping application for extracting content from a variety of popular websites: YouTube, Facebook, Rapidshare, and so on. The application will include a search capability to find content URLs, but should also allow the user to paste a URL into the application if they already know where the media is. Youtube Downloader already does this for a variety of video sites. When the program is supplied with a URL, it decides which kind of scraper to use to get the content; for example, a YouTube watch link returns a YoutubeScraper, a Facebook fan-page link returns a FacebookScraper, and so on. Should I use the factory pattern to do this? My idea is that the factory has one public method. It takes a String argument representing a link, and returns a suitable implementation of the Scraper interface. I guess the factory would hold a list of Scraper implementations, and would match the link against each Scraper until it finds a suitable one. If there is no suitable one, it throws an Exception instead.
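
    That idea maps directly onto a simple factory; a sketch (interface and class names are illustrative, not from an existing library):

        import java.util.List;

        interface Scraper {
            boolean canHandle(String url);
            // content-extraction methods would go here
        }

        class ScraperFactory {
            private final List<Scraper> scrapers;

            ScraperFactory(List<Scraper> scrapers) {
                this.scrapers = scrapers;
            }

            /** Returns the first registered scraper that recognises the link. */
            Scraper forUrl(String url) {
                for (Scraper s : scrapers) {
                    if (s.canHandle(url)) {
                        return s;
                    }
                }
                throw new IllegalArgumentException("No scraper registered for: " + url);
            }
        }

    Registering each implementation (YoutubeScraper, FacebookScraper, ...) in that list keeps the dispatch logic in one place, and adding a new site means one new class plus one list entry.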

    Read the article

  • How to add Category in DotClear blog with HttpWebRequest or MetaWeblog API

    - by Pitming
    I'm trying to create/modify DotClear blogs. For most of the options I use the XmlRpc API (DotClear.MetaWeblog), but I didn't find any way to handle categories, so I started to look at the HTTP packets and tried to do "the same as the browser". Here is the method I use for the HTTP POST:

        protected HttpStatusCode HttpPost(Uri url_, string data_, bool allowAutoRedirect_)
        {
            HttpWebRequest Request;
            HttpWebResponse Response = null;
            Stream ResponseStream = null;
            Request = (System.Net.HttpWebRequest)HttpWebRequest.Create(url_);
            Request.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729)";
            Request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
            Request.AllowAutoRedirect = allowAutoRedirect_;
            // Add the network credentials to the request.
            Request.Credentials = new NetworkCredential(Username, Password);
            string authInfo = Username + ":" + Password;
            authInfo = Convert.ToBase64String(Encoding.Default.GetBytes(authInfo));
            Request.Headers["Authorization"] = "Basic " + authInfo;
            Request.Method = "POST";
            Request.CookieContainer = Cookies;
            if (ConnectionCookie != null)
                Request.CookieContainer.Add(url_, ConnectionCookie);
            if (dcAdminCookie != null)
                Request.CookieContainer.Add(url_, dcAdminCookie);
            Request.PreAuthenticate = true;
            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = data_;
            byte[] data = encoding.GetBytes(postData); //Encoding.UTF8.GetBytes(data_); //encoding.GetBytes(postData);
            Request.ContentLength = data.Length;
            Request.ContentType = "application/x-www-form-urlencoded";
            Stream newStream = Request.GetRequestStream();
            // Send the data.
            newStream.Write(data, 0, data.Length);
            newStream.Close();
            try
            {
                // Get the response from the server.
                Response = (HttpWebResponse)Request.GetResponse();
                if (!allowAutoRedirect_)
                {
                    foreach (Cookie c in Response.Cookies)
                    {
                        if (c.Name == "dcxd") ConnectionCookie = c;
                        if (c.Name == "dc_admin") dcAdminCookie = c;
                    }
                    Cookies.Add(Response.Cookies);
                }
                // Get the response stream.
                ResponseStream = Response.GetResponseStream();
                // Pipes the stream to a higher level stream reader with the required encoding format.
                StreamReader readStream = new StreamReader(ResponseStream, Encoding.UTF8);
                string result = readStream.ReadToEnd();
                if (Request.RequestUri == Response.ResponseUri)
                {
                    _log.InfoFormat("{0} ==> {1}({2})", Request.RequestUri, Response.StatusCode, Response.StatusDescription);
                }
                else
                {
                    _log.WarnFormat("RequestUri:{0}\r\nResponseUri:{1}\r\nstatus code:{2} Status descr:{3}", Request.RequestUri, Response.ResponseUri, Response.StatusCode, Response.StatusDescription);
                }
            }
            catch (WebException wex)
            {
                Response = wex.Response as HttpWebResponse;
                if (Response != null)
                {
                    _log.ErrorFormat("{0} ==> {1}({2})", Request.RequestUri, Response.StatusCode, Response.StatusDescription);
                }
                Request.Abort();
            }
            finally
            {
                if (Response != null)
                {
                    // Releases the resources of the response.
                    Response.Close();
                }
            }
            if (Response != null)
                return Response.StatusCode;
            return HttpStatusCode.Ambiguous;
        }

    So the first thing to do is to authenticate as admin. Here is the code:

        protected bool HttpAuthenticate()
        {
            Uri u = new Uri(this.Url);
            Uri url = new Uri(string.Format("{0}/admin/auth.php", u.GetLeftPart(UriPartial.Authority)));
            string data = string.Format("user_id={0}&user_pwd={1}&user_remember=1", Username, Password);
            var ret = HttpPost(url, data, false);
            return (ret == HttpStatusCode.OK || ret == HttpStatusCode.Found);
        }

    Now that I'm authenticated, I need to get the xd_check info, which I can find on the page, so basically it's a GET on /admin/category.php plus Regex("dotclear[.]nonce = '(.*)'"). So I'm authenticated and have the xd_check info. The last thing to do seems to be posting the new category, but of course it does not work at all... here is the code:

        string postData = string.Format("cat_title={0}&new_cat_parent={1}&xd_check={2}", category_, 0, xdCheck);
        HttpPost(url, postData, true);

    If anyone can help me and explain where it goes wrong, thanks in advance.
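
    One thing worth checking in that last step (a sketch, not a confirmed fix, and the DotClear field names above are the question's own guess): the category title and nonce go into the body un-encoded, so any space or accented character breaks the application/x-www-form-urlencoded format. Encoding each value first at least rules that out:

        string postData = string.Format("cat_title={0}&new_cat_parent={1}&xd_check={2}",
            Uri.EscapeDataString(category_), 0, Uri.EscapeDataString(xdCheck));
        HttpPost(url, postData, true);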

    Read the article

  • LIKE operator with $variable

    - by skarama
    This is my first question here and I hope it is simple enough to get a quick answer! Basically, I have the following code:

        $variable = curPageURL();
        $query = 'SELECT * FROM `tablename` WHERE `columnname` LIKE '$variable' ;

    If I echo the $variable, it prints the current page's URL (which comes from a JavaScript on my page). Ultimately, what I want is to be able to make a search for which the search term is the current page's URL, with wildcards before and after. I am not sure if this is possible at all, or if I simply have a syntax error, because I get no errors, simply no result! I tried:

        $query = 'SELECT * FROM `tablename` WHERE `columnname` LIKE '"echo $variable" ' ;

    But again, I'm probably missing or using a misplaced ' " ; etc. Please tell me what I'm doing wrong!
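
    The quoting looks like the immediate problem: a single-quoted string stops at the quote just before $variable, so the variable never makes it into the SQL. A sketch of one working form, with the wildcards added and the value escaped (this assumes the old mysql_* extension; curPageURL() is the asker's own function):

        $variable = curPageURL();
        $query = "SELECT * FROM `tablename` WHERE `columnname` LIKE '%"
               . mysql_real_escape_string($variable) . "%'";
        $result = mysql_query($query);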

    Read the article

  • how to run fastcgi

    - by joels
    I have fastcgi installed and running. I downloaded a developer kit from fastcgi.com. It had some examples in it. One of the example files echoes some stuff. It required a .libs and a .deps folder; I put those folders, along with an echo.fcgi file, into webroot/cgi-bin. If I go to the echo.fcgi URL, it works great. I created a simple C file that prints hello world. I compile it using

        gcc -Wall -o main -lfcgi main.c

    What do I do with it now? Does it require something like a Perl script or PHP script to be executed, or should I just be able to put it in the webroot/cgi-bin folder and go to its URL?
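
    No interpreter is needed; the compiled binary is the FastCGI program, but it has to accept requests in a loop, and it usually needs the same treatment the echo example got (an .fcgi name or an explicit FastCGI handler mapping in the web server config, which is server-specific and assumed here). A minimal sketch using the devkit's fcgi_stdio wrapper:

        /* hello.c - compile roughly as: gcc -Wall -o hello.fcgi hello.c -lfcgi */
        #include <fcgi_stdio.h>

        int main(void)
        {
            /* FCGI_Accept blocks until the web server hands over a request. */
            while (FCGI_Accept() >= 0) {
                printf("Content-type: text/plain\r\n\r\n");
                printf("Hello, world!\n");
            }
            return 0;
        }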

    Read the article

  • asp.net mvc 2 web application inside a Web site?

    - by Amitabh
    I have an ASP.NET Web Site deployed as a website inside IIS 7.5: http://localhost/WebSite. Then I have a second ASP.NET MVC 2 web application which is deployed as a sub-application inside the above website, so the MVC application should work on the following URL: http://localhost/WebSite/MvcApp/. The website works fine, but when I browse the MVC URL http://localhost/WebSite/MvcApp/ it gives the following error:

        HTTP Error 403.14 - Forbidden
        The Web server is configured to not list the contents of this directory.
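
    A common cause of 403.14 for an MVC sub-application (worth checking, though not guaranteed to be the issue here) is that the extensionless route never reaches the managed pipeline, so IIS treats /WebSite/MvcApp/ as a directory listing. The usual checks are that the sub-application runs in an Integrated-mode application pool and, if needed, that the following is enabled in the MVC app's own web.config:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>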

    Read the article

  • Cron job for list of urls

    - by mathew
    Duplicate of http://stackoverflow.com/questions/2968918/how-do-i-do-cron-job-for-list-of-urls-closed - sorry for the confusion... How do I do a cron job using a list of URLs? The search path is mentioned below:

        www.mydomain.com/search?url=www.google.com

    After google.com, the next URL in urllist.txt should be used, so by the end of the day the whole list of URLs in urllist.txt will have been processed and saved in my database. What I need is how to feed the URLs from urllist.txt into the search path and put it in a cron job; the rest I can handle. Thanks, Mathew
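
    A sketch of one way to do it (this assumes a PHP script, one URL per line in urllist.txt, and a hypothetical path /path/to/run_urllist.php; the cron line then does the scheduling, e.g. 0 2 * * * /usr/bin/php /path/to/run_urllist.php):

        $urls = file('urllist.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        foreach ($urls as $u) {
            // Hit the existing search path once per URL in the list.
            $page = file_get_contents('http://www.mydomain.com/search?url=' . urlencode($u));
            // ... parse $page and save the result to the database ...
        }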

    Read the article

  • How can I create and manage a multi-tenant ASP MVC application

    - by Wizzarding
    Hi, I want to create a multi-tenant application that uses the hostname to determine the customer. For example:

        CustomerOne.myapp.com
        AnotherCo.myapp.com
        AndOneMore.myapp.com
        ...

    I can do the database and security side with no problems, and I can also get the hostname from the URL, but what I am struggling to find out is how to create the basic plumbing that would allow a new customer to sign up online, provide their company name, and have the application create the new URL, ready to be used straight away. Can anyone help? Thanks, Rob.
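
    The plumbing usually comes down to a wildcard DNS record (*.myapp.com) plus a single IIS binding that accepts any host name, so "creating the new URL" is just inserting the chosen subdomain into a tenants table; the application then resolves the tenant per request. A sketch (names are illustrative, not an established API):

        // e.g. called from a base controller or a custom route constraint
        public static string GetTenantKey(HttpRequestBase request)
        {
            string host = request.Url.Host;              // "customerone.myapp.com"
            string[] parts = host.Split('.');
            return parts.Length > 2 ? parts[0] : null;   // "customerone", null for the bare domain
        }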

    Read the article

  • Is there a more efficient way to get the number of search results from a google query?

    - by highone
    Right now I am using this code:

        string url = "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=hey&esrch=FT1";
        string source = getPageSource(url);
        string[] stringSeparators = new string[] { "<b>", "</b>" };
        string[] b = source.Split(stringSeparators, StringSplitOptions.None);
        bool isResultNum = false;
        foreach (string s in b)
        {
            if (isResultNum)
            {
                MessageBox.Show(s.Replace(",", ""));
                return;
            }
            if (s.Contains(" of about "))
            {
                isResultNum = true;
            }
        }

    Unfortunately it is very slow; is there a better way to do it? Also, is it legal to query Google like this? From the answer in this question it didn't sound like it was: http://stackoverflow.com/questions/903747/how-to-download-google-search-results
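
    Most of the time is probably the page download rather than the parsing, but the string handling can still be tightened: one regex pass over the source avoids splitting the whole document into an array. A sketch that relies on the same " of about <b>N</b>" wording the code above already depends on (so it inherits the same fragility):

        using System.Text.RegularExpressions;

        Match m = Regex.Match(source, @" of about <b>([\d,]+)</b>");
        if (m.Success)
        {
            string count = m.Groups[1].Value.Replace(",", "");
            MessageBox.Show(count);
        }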

    Read the article

  • iPhone Development: Get images from RSS feed

    - by Matthew Saeger
    I am using the NSXMLParser to get new RSS stories from a feed and am displaying them in a UITableView. However, now I want to take ONLY the images and display them in a UIScrollView/UIImageView (3 images side by side). I am completely lost. I am using the following code to obtain one image from a URL:

        NSURL *theUrl1 = [NSURL URLWithString:@"http://farm3.static.flickr.com/2586/4072164719_0fa5695f59.jpg"];
        JImage *photoImage1 = [[JImage alloc] init];
        [photoImage1 setContentMode:UIViewContentModeScaleAspectFill];
        [photoImage1 setFrame:CGRectMake(0, 0, 320, 170)];
        [photoImage1 initWithImageAtURL:theUrl1];
        [imageView1 addSubview:photoImage1];
        [photoImage1 release];

    This is all I have accomplished, and it works for one image, but I have to specify the exact URL. What would you recommend I do to accomplish this?
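
    The missing piece is collecting the image URLs while the feed is being parsed; the images themselves can then be loaded the same way as above. A sketch (it assumes the feed exposes images as <enclosure url="..."> or <media:content url="..."> elements and that imageUrls is an NSMutableArray instance variable):

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
          namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
            attributes:(NSDictionary *)attributeDict
        {
            if ([elementName isEqualToString:@"enclosure"] ||
                [elementName isEqualToString:@"media:content"]) {
                NSString *src = [attributeDict objectForKey:@"url"];
                if (src != nil) {
                    [imageUrls addObject:src];
                }
            }
        }

    After parsing, the first three URLs can be laid out side by side by giving each image view an x offset of 0, 320 and 640 and setting the scroll view's contentSize width to 960.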

    Read the article

  • Webfaction: How do I run a Static/Perl app and Django app under the same website

    - by swisstony
    I have an existing Perl app that I'm moving to a Webfaction website. I will be adding Django apps to this Webfaction website too. I would like the Django app to get first call and so would want its URL path to be /. This would allow me to add any new URLs I wish to urls.py as my app grows. If the URL doesn't match anything in urls.py, I would like it to get passed to the static Perl app. For example:

        /app1 - Django
        /app2 - Django

    Everything else not picked up by urls.py I would want going to my Perl app. For example:

        /index.html - Static/Perl app
        /about.html - Static/Perl app
        /contact.html - Static/Perl app
        /apps/perlapp1.cgi - Static/Perl app

    etc. How do I go about achieving this in Webfaction?

    Read the article

  • Reporting Services as PDF through WebRequest in C# 3.5 "Not Supported File Type"

    - by Heath Allison
    I've inherited a legacy application that is supposed to grab an on-the-fly PDF from a Reporting Services server. Everything works fine up until the point where you try to open the PDF being returned and Adobe Acrobat tells you:

        Adobe Reader could not open 'thisStoopidReport.pdf' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded).

    I've done some initial troubleshooting on this. If I replace the URL in the WebRequest.Create() call with a valid PDF file on my local machine (ie: @"C:temp/validpdf.pdf") then I get a valid PDF. The report itself seems to work fine: if I manually type the URL to the Reporting Services report that should generate the PDF file, I am prompted for user authentication, but after supplying it I get a valid PDF file. I've replaced the actual URL, username, password, and domain strings in the code below with bogus values for obvious reasons.

        WebRequest request = WebRequest.Create(@"http://x.x.x.x/reportServer?/reports/reportNam&rs:format=pdf&rs:command=render&rc:parameters=blahblahblah");
        int totalSize = 0;
        request.Credentials = new NetworkCredential("validUser", "validPass", "validDomain");
        request.Timeout = 360000; // 6 minutes in milliseconds.
        request.Method = WebRequestMethods.Http.Post;
        request.ContentLength = 0;
        WebResponse response = request.GetResponse();
        Response.Clear();
        BinaryReader reader = new BinaryReader(response.GetResponseStream());
        Byte[] buffer = new byte[2048];
        int count = reader.Read(buffer, 0, 2048);
        while (count > 0)
        {
            totalSize += count;
            Response.OutputStream.Write(buffer, 0, count);
            count = reader.Read(buffer, 0, 2048);
        }
        Response.ContentType = "application/pdf";
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.CacheControl = "private";
        Response.Expires = 30;
        Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");
        Response.AddHeader("Content-Length", totalSize.ToString());
        reader.Close();
        Response.Flush();
        Response.End();
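
    A diagnostic sketch (not a confirmed fix): before streaming the bytes to the browser, check what the report server actually sent back. An HTML error page (a login form, an rsItemNotFound message, and so on) saved under a .pdf name produces exactly the "not a supported file type" message in Adobe Reader.

        // right after: WebResponse response = request.GetResponse();
        if (!response.ContentType.StartsWith("application/pdf"))
        {
            using (StreamReader sr = new StreamReader(response.GetResponseStream()))
            {
                string body = sr.ReadToEnd();
                throw new InvalidOperationException(
                    "Report server returned " + response.ContentType + ": " + body);
            }
        }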

    Read the article

  • Coldfusion: download PDF

    - by dmr
    I have a URL that opens a PDF:

        <cfoutput>http://myUrl.cfm?params=#many#</cfoutput>

    I would like to enable my users to download that PDF instead of having it open in the browser. I've been trying the following, and it isn't working:

        <cfoutput>
            <cfcontent type="application/pdf" file="http://myUrl.cfm?params=#many#"/>
            <cfheader name="content-diposition" value="attachment; filename='http://myUrl.cfm?params=#many#'">
            <cflocation url="http://myUrl.cfm?params=#many#"/>
        </cfoutput>

    What am I doing wrong?
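
    A few things appear to work against each other there: cfcontent's file attribute expects a filesystem path, not a URL; the header name has a typo (it needs to be Content-Disposition, and the filename should be a plain file name); and cflocation replaces the response entirely, undoing the two tags before it. A sketch of one way to do it (attribute names are standard CFML, CF8-era; the filename is a placeholder):

        <cfhttp url="http://myUrl.cfm?params=#many#" method="get" getasbinary="yes" result="pdfResult">
        <cfheader name="Content-Disposition" value='attachment; filename="report.pdf"'>
        <cfcontent type="application/pdf" variable="#pdfResult.fileContent#">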

    Read the article

  • is there a way using Ruby's net/http to post form data to an http proxy?

    - by Derek P.
    I have a basic Squid server set up and I am trying to use Ruby's Net::HTTP::Proxy class to send a POST of form data to a specified HTTP endpoint. I assumed I could do the following:

        Net::HTTP::Proxy(my_host, my_port).start(url.host) do |h|
          req = Net::HTTP::Post.new(url.path)
          req.form_data = { "xml" => xml }
          h.request(req)
        end

    But, alas, the proxied vs. non-proxied Net::HTTP classes don't seem to use the proxy IP address: my remote service responds telling me that it received a request from the wrong IP address, i.e. not the proxy. I am looking for a specific way to write the procedure so that I can successfully send a form post via a proxy. Help? :)
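
    A sketch with the proxy spelled out explicitly (host, port and endpoint are placeholders). One thing to rule out first: a stock Squid install adds an X-Forwarded-For header carrying the original client IP, so the remote service may be reading that header rather than the connecting address; turning forwarded_for off in squid.conf (or checking the service's logic) is worth a look before blaming Net::HTTP.

        require 'net/http'
        require 'uri'

        url   = URI.parse("http://example.com/endpoint")    # placeholder endpoint
        proxy = Net::HTTP::Proxy("proxy.example.com", 3128)  # placeholder Squid host/port

        proxy.start(url.host, url.port) do |http|
          req = Net::HTTP::Post.new(url.path)
          req.set_form_data("xml" => xml)
          res = http.request(req)
          puts res.code
        end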

    Read the article

  • IIS 7: Redirect all request to Default.aspx

    - by EtienneT
    We want to redirect all requests in an ASP.NET site to ~/Default.aspx to close the site. We are using IIS 7. The site has paths like this that return a page: http://test.com/operating. We are using URL rewriting. We want requests similar to these to be redirected to ~/Default.aspx:

        http://test.com//
        http://test.com/.aspx
        http://test.com//.aspx

    We would normally use something like this in web.config:

        <customErrors mode="On" defaultRedirect="Default.aspx">
          <error statusCode="404" redirect="Default.aspx" />
        </customErrors>

    The problem with this is that it won't redirect folder URLs like this: http://test.com/*/. Thanks!
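
    Since the goal is every request, not just 404s, the IIS 7 URL Rewrite module can handle it at the server level instead of relying on customErrors. A sketch (the module has to be installed separately; the rule name and pattern are illustrative) that sends everything except Default.aspx itself there:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="EverythingToDefault" stopProcessing="true">
                <match url=".*" />
                <conditions>
                  <add input="{URL}" pattern="^/Default\.aspx$" negate="true" />
                </conditions>
                <action type="Redirect" url="/Default.aspx" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>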

    Read the article

  • Good error flagging

    - by Wayne
    For everyone who was thinking of the error_reporting() function: that's not it. What I need is this: whenever a MySQL query has been done and the code has something like

        if($result) {
            echo "Yes, it was fine... bla bla";
        } else {
            echo "Obviously, the echo'ing will show in a white page with the text ONLY...";
        }

    Whenever the statement has come out true or false, I want the message to appear after redirecting with the header() function, echoed in a div somewhere on the page. Basically something like this:

        $error = '';

    This part appears inside the div tags:

        <div><?php echo $error; ?></div>

    So the error part will be echoed after redirecting with header():

        if($result) {
            $error = "Yes, it was fine... bla bla";
            header("Location: url");
        } else {
            $error = "Something wrong happened...";
            header("Location: url");
        }

    But that just doesn't work :(
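
    It doesn't work because header("Location: ...") starts a completely new request, so a plain $error variable is gone by the time the target page runs. The usual workaround is a session "flash" message; a sketch (the key name is arbitrary):

        // page that runs the query:
        session_start();
        if ($result) {
            $_SESSION['flash'] = "Yes, it was fine... bla bla";
        } else {
            $_SESSION['flash'] = "Something wrong happened...";
        }
        header("Location: url");
        exit;

        // page that contains the div:
        session_start();
        $error = isset($_SESSION['flash']) ? $_SESSION['flash'] : '';
        unset($_SESSION['flash']);

    The div itself stays exactly as above: <div><?php echo $error; ?></div>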

    Read the article

  • Using VRaptor3 with Tomcat UnpackWARs property setted to false

    - by VinTem
    Hi there, I have to deploy my web app to a Tomcat container with the unpackWARs property set to false. When I do that, the application deploys successfully, but when I try to access my URL I always get a 404 error. The only time I don't get that error is when I access a direct file like index.html, for instance. But I can't rely on that; the VRaptor framework is responsible for routing my URL to the JSP file. Does anyone know if I have to do anything else? Thanks

    Read the article

  • Provisioning api jpa

    - by user268515
    Hi, I tried the following code.

    Appsprovisioning.java:

        public void calluser() throws AppsForYourDomainException, IOException {
            for (UserEntry userEntry : retrieveAllUsers().getEntries()) {
                m[x] = userEntry.getTitle().getPlainText();
                x++;
            }
            try {
                for (int i = 0; i < x; i++) {
                    String sd = m[i];
                    stud greeting1 = new stud(sd);
                    em.persist(greeting1);
                    System.out.println("jk");
                }
            } finally {
                em.close();
            }
        }

        public UserFeed retrieveAllUsers() throws ServiceException, IOException {
            userService = new UserService("Myapplication");
            userService.setUserCredentials("[email protected]", "xxxxxxxx");
            URL retrieveUrl = new URL("https://www.google.com/a/feeds/montfortperungudi.edu.in/user/2.0/");
            UserFeed allUsers = new UserFeed();
            UserFeed currentPage;
            Link nextLink;
            do {
                currentPage = userService.getFeed(retrieveUrl, UserFeed.class);
                allUsers.getEntries().addAll(currentPage.getEntries());
                nextLink = currentPage.getLink(Link.Rel.NEXT, Link.Type.ATOM);
                if (nextLink != null) {
                    retrieveUrl = new URL(nextLink.getHref());
                }
            } while (nextLink != null);
            return allUsers;
        }

    Servlet.java:

        public class servlet extends HttpServlet {
            private static final long serialVersionUID = 1L;
            private static final Logger log = Logger.getLogger(servlet.class.getName());
            // EntityManager em = null;
            AppsProvisioning aa = new AppsProvisioning();

            public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // em = EMFService.get().createEntityManager();
                try {
                    aa.calluser();
                } catch (Exception e) {
                    System.out.println("SEF " + e);
                } finally {
                    // em.clear();
                    // em.close();
                }
            }
        }

    Table creation:

        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity(name = "stud")
        public class stud {
            @Id
            private String fathername;

            public stud(String fathername) { // TODO Auto-generated constructor stub
                this.fathername = fathername;
            }

            public void setFathername(String fathername) {
                this.fathername = fathername;
            }

            public String getFathername() {
                return fathername;
            }
        }

    I can't store all the users in the table. It's returning a "Session out" error.

    Read the article

  • Problem requesting a HTTPS with TCL

    - by Javier
    Hi everybody, I'm trying to do the following request using Tcl (OpenACS):

        http::register https 443 tls::socket
        set url "https://encrypted.google.com"
        set token [http::geturl $url -timeout 30000]
        set status [http::status $token]
        set answer [http::data $token]
        http::cleanup $token
        http::unregister https

    The problem is that when I read the $status variable I get "eof" and the $answer variable comes back empty. I tried enabling TLS v1,

        http::register https 443 [list tls::socket -tls1 1]

    and it works only for the site https://www.galileo.edu, but not for https://encrypted.google.com. The site I'm actually trying to connect to is https://graph.facebook.com/me/feed?access_token=... but it doesn't work either. I used curl to retrieve the contents of the pages over HTTPS and it works, and I have OpenSSL installed, so I can't see the problem. Is there another way to do HTTPS connections with Tcl? I can't tell if this is a coding problem (maybe I registered the https protocol wrong) or a bad configuration of my server. Hope somebody helps!! Thanks!
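
    One thing the plain tls::socket registration never does is send the host name during the TLS handshake (SNI), and many HTTPS endpoints behave differently without it. A sketch, assuming a reasonably recent tls package (the -autoservername option appeared around tls 1.7; older versions have -servername <host> instead):

        package require http
        package require tls

        http::register https 443 [list ::tls::socket -autoservername true]

        set token [http::geturl "https://encrypted.google.com" -timeout 30000]
        puts [http::status $token]
        http::cleanup $token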

    Read the article

  • ClickOnce: How do I pass a querystring value to my app *through the installer*?

    - by Timothy Khouri
    My company currently builds separate MSIs for all of our clients, even though the app is 100% the same across the board (with a single exception, an ID in the app.config). I would like to show them that we can publish in one place with ClickOnce and simply add a query string parameter for each client's installer. Example:

        http://mysite.com/setup.exe?ID=1234-56-7890

    The issue that I'm having is that the above ("ID=1234...") is not being passed along to the myapplication.application. What is happening instead is that the app is being installed successfully, and it is running the first time with an activation context, but the ActivationUri does not contain any query string values. Is there a way to pass query string values FROM THE INSTALLER URL to the application's launch URL? If so, how?
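
    As the behaviour above suggests, the setup.exe bootstrapper does not forward the query string; what does survive is a parameter appended to the .application URL itself, provided "Allow URL parameters to be passed to application" is enabled in the ClickOnce publish options. Inside the app it can then be read from the activation URI; a sketch (the parameter name ID matches the example above):

        using System.Deployment.Application;
        using System.Web;   // reference System.Web for HttpUtility

        string id = null;
        if (ApplicationDeployment.IsNetworkDeployed &&
            ApplicationDeployment.CurrentDeployment.ActivationUri != null)
        {
            var query = HttpUtility.ParseQueryString(
                ApplicationDeployment.CurrentDeployment.ActivationUri.Query);
            id = query["ID"];   // "1234-56-7890" for the example link
        }

    So the per-client link would point at http://mysite.com/myapplication.application?ID=1234-56-7890 rather than at setup.exe.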

    Read the article

  • How do you link to an action that takes an array as a parameter (RedirectToAction and/or ActionLink)

    - by Andrew
    I have an action defined like so:

        public ActionResult Foo(int[] bar) { ... }

    URLs like this will work as expected:

        .../Controller/Foo?bar=1&bar=3&bar=5

    I have another action that does some work and then redirects to the Foo action above for some computed values of bar. Is there a simple way of specifying the route values with RedirectToAction or ActionLink so that the URLs get generated like the above example? These don't seem to work:

        return RedirectToAction("Foo", new { bar = new[] { 1, 3, 5 } });
        return RedirectToAction("Foo", new[] { 1, 3, 5 });
        <%= Html.ActionLink("Foo", "Foo", new { bar = new[] { 1, 3, 5 } }) %>
        <%= Html.ActionLink("Foo", "Foo", new[] { 1, 3, 5 }) %>

    However, for a single item in the array, these do work:

        return RedirectToAction("Foo", new { bar = 1 });
        <%= Html.ActionLink("Foo", "Foo", new { bar = 1 }) %>

    When setting bar to an array, it redirects to the following:

        .../Controller/Foo?a=System.Int32[]

    Finally, this is with ASP.NET MVC 2 RC. Thanks.
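
    RedirectToAction and ActionLink both funnel the anonymous object into a RouteValueDictionary, which ends up calling ToString() on the array (hence System.Int32[] in the query). One common workaround (a sketch, not an official MVC API for arrays) is to use indexed keys, which the DefaultModelBinder also understands and which a dictionary can hold because each key is unique:

        var values = new RouteValueDictionary();
        int[] bar = { 1, 3, 5 };
        for (int i = 0; i < bar.Length; i++)
        {
            values.Add("bar[" + i + "]", bar[i]);
        }
        return RedirectToAction("Foo", values);
        // produces .../Controller/Foo?bar%5B0%5D=1&bar%5B1%5D=3&bar%5B2%5D=5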

    Read the article

  • Drawbacks to using background-repeat only for colors?

    - by ineedtosleep
    So I need some custom colors on a layout, but I'm looking for a better way of doing it other than just slapping a giant picture with background: url(something.jpg) into the layout. Mostly I'm thinking of getting a color palette (e.g. from Adobe Kuler, COLOURlovers, etc.), getting a 5x5 sample of each color and sticking them in an array for CSS sprites, or just as separate files, and accessing them through:

        .color-one { background: transparent url(./one.gif) repeat; }

    and just reusing that whenever I'd like to use the color. Are there any drawbacks to doing it this way? And if there are, should I just stick with web-safe colors, or is there a better way of doing this?
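
    If the only goal is flat color, a plain background-color rule does the same job with no extra HTTP request and no tile to repeat; a sketch (the hex values are placeholders for whatever the palette provides):

        .color-one   { background-color: #69d2e7; }
        .color-two   { background-color: #a7dbd8; }
        .color-three { background-color: #f38630; }

    The image-per-color approach mainly makes sense when the "color" is actually a subtle texture or gradient; for solid fills it just adds requests (or sprite bookkeeping) without widening browser support, and there's no need to fall back to web-safe colors on modern displays.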

    Read the article
