Search Results

Search found 21550 results on 862 pages for 'www jacob'.

  • Multiple robots.txt for subdomains in rails

    - by Christopher
    I have a site with multiple subdomains, and I want the named subdomains' robots.txt to be different from the www one. I tried to use .htaccess, but FastCGI doesn't look at it. So I was trying to set up routes, but it doesn't seem that you can do a direct rewrite, since every route needs a controller:

      map.connect '/robots.txt', :controller => ?, :path => '/robots.www.txt', :conditions => { :subdomain => 'www' }
      map.connect '/robots.txt', :controller => ?, :path => '/robots.club.txt'

    What would be the best way to approach this problem? (I am using the request_routing plugin for subdomains.)
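
    One possible approach (a sketch, not from the question; the controller name and file locations are assumptions): point the route at a small controller that picks a file by subdomain.

      # config/routes.rb (Rails 2.x syntax, to match the question)
      map.connect '/robots.txt', :controller => 'robots', :action => 'show'

      # app/controllers/robots_controller.rb
      class RobotsController < ApplicationController
        def show
          # request.subdomains is built into Rails; request_routing is not needed for this part
          subdomain = request.subdomains.first || 'www'
          file = (subdomain == 'www') ? 'robots.www.txt' : 'robots.club.txt'
          render :file => "#{RAILS_ROOT}/config/#{file}",
                 :content_type => 'text/plain', :layout => false
        end
      end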

    Read the article

  • C# web request with POST encoding question

    - by rlandster
    On the MSDN site there is an example of some C# code that shows how to make a web request with POSTed data. Here is an excerpt of that code:

      WebRequest request = WebRequest.Create("http://www.contoso.com/PostAccepter.aspx");
      request.Method = "POST";
      string postData = "This is a test that posts this string to a Web server.";
      byte[] byteArray = Encoding.UTF8.GetBytes(postData); // (*)
      request.ContentType = "application/x-www-form-urlencoded";
      request.ContentLength = byteArray.Length;
      Stream dataStream = request.GetRequestStream();
      dataStream.Write(byteArray, 0, byteArray.Length);
      dataStream.Close();
      WebResponse response = request.GetResponse();
      ...more...

    The line marked (*) is the one that puzzles me. Shouldn't the data be encoded using the UrlEncode function rather than UTF8? Isn't that what application/x-www-form-urlencoded implies?
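
    For what it's worth, the two steps are complementary rather than alternatives: URL-encoding defines the key=value&... format of the string, while UTF-8 turns that string into bytes. A sketch of how they combine, replacing the middle lines of the excerpt (the field names here are invented):

      // Form-encode each value, then UTF-8-encode the whole string.
      string postData = "comment=" + Uri.EscapeDataString("This is a test & more")
                      + "&author=" + Uri.EscapeDataString("rlandster");
      byte[] byteArray = Encoding.UTF8.GetBytes(postData);
      request.ContentType = "application/x-www-form-urlencoded";
      request.ContentLength = byteArray.Length;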

    Read the article

  • More HtAccess Rewrite Rules

    - by pws5068
    Greetings all, I need help combining some .htaccess rewrites; these crazy regular expressions screw with my head. I have a folder structure something like this:

      /www/mysite.com/page/member/friends.php
      /www/mysite.com/page/video/videos.php
      /www/mysite.com/page/messages/inbox.php

    The URLs get rewritten to this:

      mysite.com/member/friends.php
      mysite.com/video/videos.php
      mysite.com/messages/inbox.php

    (Notice the /page/ folder is hidden in the URL, but I keep it on the server for better file organization.) The rewrite rules look something like this (I'm new, so correct me if they are flawed):

      RewriteRule ^video/(.*)$ /page/video/$1 [NC]
      RewriteRule ^member/(.*)$ /page/member/$1 [NC]
      RewriteRule ^messages/(.*)$ /page/messages/$1 [NC]

    Now I also need to do a completely different rewrite to a file called lobby.php inside the member folder. After the original rewrites, a sample URL looks like mysite.com/member/lobby.php?member=pws5068, and I need a new rewrite to make it look like mysite.com/pws5068. Thank you for bearing with my super-long question here. How can I make this happen?
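
    One possible shape for the extra rule (a sketch, untested; the username character class is an assumption): match a bare single-segment path, but only after excluding real files, directories, and the known section prefixes, so the existing rules keep working.

      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !^/(page|member|video|messages)/ [NC]
      RewriteRule ^([A-Za-z0-9_-]+)/?$ /page/member/lobby.php?member=$1 [NC,L]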

    Read the article

  • jQuery .each or search function, how can I make use of those?

    - by Noor
    I have a ul list with 10 items, and I have a label that displays the selected li text. When I click a button, I need to check the label against all the list items to find the matching one, and when it finds it, I need to get the corresponding value. I.e.:

      list: Messi, Cristiano, Zlatan
      hidden values of list items: www.messi.com, www.cronaldo.com, www.ibra.com
      label: Zlatan

    Script thought process: get the label text, search the list for the matching string, get the value of that string. (And if someone could point me in a direction to learn this basic(?) stuff, that would help.) I tried to be as specific as possible, thanks guys!
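
    A rough sketch of that thought process (the markup and IDs are assumptions: each li carries its hidden value in a data-url attribute):

      // <ul id="players"><li data-url="www.ibra.com">Zlatan</li> ... </ul>
      // <span id="label">Zlatan</span>  <button id="find">Find</button>
      $('#find').click(function () {
          var wanted = $.trim($('#label').text());
          $('#players li').each(function () {
              if ($.trim($(this).text()) === wanted) {
                  var url = $(this).attr('data-url'); // the hidden value of the match
                  alert(url);
                  return false; // returning false stops .each() early
              }
          });
      });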

    Read the article

  • Read Values from xml file

    - by Nimesh
    I have a function, Translate, which reads a value from an XML file based on a key. I want to put "http://www.google.com?search=" in the XML file and read it based on the key (SEARCHER). I am confused about how to build the link inside Response.Write:

      <%
      Dim SearchQuery1
      SearchQuery1 = "New"
      Dim SearchQuery2
      SearchQuery2 = 30
      Response.Write("<A HREF=""http://www.google.com?search=" & SearchQuery1 & "-" & SearchQuery2 & """ TARGET=""links"">http://www.google.com?search=" & SearchQuery1 & "-" & SearchQuery2 & "</A>")
      %>

    I was trying something like this:

      Response.Write("<A HREF=""Translate("SEARCHER")" & SearchQuery1 & "-" & SearchQuery2 & """ TARGET=""links"">Translate("SEARCHER")" & SearchQuery1 & "-" & SearchQuery2 & "</A>")

    but it throws an error: Expected ')'. Please let me know how I can solve this.
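
    A possible fix (a sketch, assuming Translate is the questioner's own lookup function): call Translate outside the string literal and concatenate its result with &, instead of embedding the call inside the quotes.

      <%
      Dim baseUrl, SearchQuery1, SearchQuery2
      baseUrl = Translate("SEARCHER")   ' reads "http://www.google.com?search=" from the XML
      SearchQuery1 = "New"
      SearchQuery2 = 30
      Response.Write("<A HREF=""" & baseUrl & SearchQuery1 & "-" & SearchQuery2 & _
                     """ TARGET=""links"">" & baseUrl & SearchQuery1 & "-" & SearchQuery2 & "</A>")
      %>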

    Read the article

  • Should I use .pl or .cgi for Perl web script files?

    - by Nano HE
    Hi. I created two files, 'hello.pl' and 'hello.cgi', with the code below:

      #!/usr/bin/perl
      print "Content-type:text/html\n\n";
      print "hello world";

    I can view the page via both http://www.mydomain.com/cgi-bin/hello.pl and http://www.mydomain.com/cgi-bin/hello.cgi. Which one makes more sense in Perl web development? By the way, the 'cgi-bin' directory was created by my VPS server. Do I need to contact my VPS support to remove it, or should I keep the URL style like this? Maybe http://www.mydomain.com/perDev/hello.cgi is better?
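
    For what it's worth, the extension is only a naming convention; what matters is which directories the server is told to execute. A sketch of an Apache configuration (the directory path is hypothetical) that runs either extension as CGI outside cgi-bin:

      # Run .cgi and .pl files as CGI scripts in a directory of your choosing
      <Directory "/var/www/perldev">
          Options +ExecCGI
          AddHandler cgi-script .cgi .pl
      </Directory>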

    Read the article

  • What is the simplest way to generate a domain-specific URL from an application path?

    - by harsh
    I have application-specific URLs like the ones below:

      ~/Default.aspx
      ~/Manage/Page.aspx
      ~/Manage/Account/Default.aspx

    I really don't know what these kinds of paths are actually called. Now I need to convert them to domain-specific complete URLs, with no ../ or ../../ in them. I want URLs like:

      http://www.example.com/Default.aspx
      http://www.example.com/Manage/Page.aspx
      http://www.example.com/Manage/Account/Default.aspx

    Currently I am doing it the following way (assuming I have an HttpRequest object):

      Request.Url.Host + path.Substring(1);

    Is there a simpler way to achieve this?
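
    These are usually called application-relative ("app-relative") paths. One sketch of an alternative: resolve the ~/ with VirtualPathUtility and let the Uri class supply scheme, host, and port from the current request. The class and method names here are invented.

      using System;
      using System.Web;

      public static class UrlHelper
      {
          // Sketch: "~/Manage/Page.aspx" -> "http://www.example.com/Manage/Page.aspx"
          public static string ToAbsoluteUrl(HttpRequest request, string appRelativePath)
          {
              string rooted = VirtualPathUtility.ToAbsolute(appRelativePath); // works under virtual directories too
              return new Uri(request.Url, rooted).AbsoluteUri;               // supplies scheme, host, and port
          }
      }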

    Read the article

  • Security Exception while running sites using subdomains?

    - by lmenaria
    I have 3 sites:

      media.lmenaria.com - hosts the images
      webservice.lmenaria.com - serves image URLs from the database
      www.lmenaria.com - hosts the Silverlight application and displays the images

    When I run the page http://www.lmenaria.com/silverlight.aspx, I get the exception below. So what should I do?

      System.Security.SecurityException: Security error.
        at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
        at System.Net.Browser.BrowserHttpWebRequest.<c_DisplayClass5.b_4(Object sendState)
        at System.Net.Browser.AsyncHelper.<c_DisplayClass2.b_0(Object sendState)
        at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
        at System.Net.Browser.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
        at System.Net.WebClient.GetWebResponse(WebRequest request, IAsyncResult result)
        at System.Net.WebClient.OpenReadAsyncCallback(IAsyncResult result)

    I thought all my sites run on the same domain, so I wouldn't need cross-domain XML files. Please let me know how I can fix it. I have tried putting a crossdomain.xml on both media.lmenaria.com and webservice.lmenaria.com, and they work fine, but it is still not working from www.lmenaria.com. We are downloading the images using WebClient. Thanks in advance, Laxmilal Menaria
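
    One thing worth checking (a sketch, not a confirmed diagnosis): Silverlight treats each subdomain as a separate cross-domain origin, and it looks for clientaccesspolicy.xml before crossdomain.xml. A minimal clientaccesspolicy.xml served from the root of media.lmenaria.com and webservice.lmenaria.com might look like this (the allowed-domain list is an assumption):

      <?xml version="1.0" encoding="utf-8"?>
      <access-policy>
        <cross-domain-access>
          <policy>
            <allow-from http-request-headers="*">
              <domain uri="http://www.lmenaria.com"/>
            </allow-from>
            <grant-to>
              <resource path="/" include-subpaths="true"/>
            </grant-to>
          </policy>
        </cross-domain-access>
      </access-policy>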

    Read the article

  • Mod_Rewrite: Testing URL got indexed in Google - How do I create a proper 301 redirect?

    - by Jonathan Wold
    I worked on a website for which I had a "development URL" that looked something like this:

      www.domainname.com.php5-9.dfw1-2.websitetestlink.com/

    Now, several weeks after the website launched, there is at least one page of content indexed in Google under that URL. Question: how do I redirect all requests from the test URL to the actual domain? For instance, I would want:

      www.domainname.com.php5-9.dfw1-2.websitetestlink.com/page-name

    to go to:

      www.domainname.com/page-name

    The website is powered by WordPress and hosted on a PHP server. I've experimented with .htaccess without much success.
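
    A sketch of the kind of .htaccess rule usually used for this (untested; the host pattern is inferred from the question): 301 every request on the test hostname to the same path on the live domain, so Google transfers the indexed page.

      RewriteEngine On
      RewriteCond %{HTTP_HOST} websitetestlink\.com$ [NC]
      RewriteRule ^(.*)$ http://www.domainname.com/$1 [R=301,L]

    WordPress's own rules can stay below this block; the [L] flag stops processing before they run for redirected requests.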

    Read the article

  • Why does Java force user-agent through simple Socket IO?

    - by Zombies
    I am using nothing but raw socket IO. There isn't one HttpURLConnection nor any HTTP client library in my project. When I run it through Wireshark, I see something very revealing:

      GET / HTTP/1.1
      User-Agent: Java/1.6.0_15
      Host: www.google.com
      Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
      Connection: keep-alive

    Here is the crazy part: I never put ANY of that in my original request. My original request was:

      "GET http://www.google.com/ HTTP/1.1\r\n" +
      "Host: www.google.com\r\n" +
      "User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100214 Ubuntu/9.10 (karmic) Firefox/3.5.8\r\n" +
      "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n" +
      "Accept-Language: en-us,en;q=0.5\r\n" +
      "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n" +
      "Keep-Alive: 300\r\n" +
      "\r\n";

    I am using the default Sun JVM.
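
    For comparison, a minimal raw-socket request looks like this (a sketch): a plain java.net.Socket writes bytes verbatim, so if Wireshark shows different headers, something other than the socket itself, such as a proxy setting or an HTTP wrapper elsewhere in the code, is rewriting the request.

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.Socket;

      public class RawGet {
          public static void main(String[] args) throws Exception {
              Socket socket = new Socket("www.google.com", 80);
              String request = "GET / HTTP/1.1\r\n"
                             + "Host: www.google.com\r\n"
                             + "User-Agent: Mozilla/5.0 (X11; U; Linux i686)\r\n"
                             + "Connection: close\r\n\r\n";
              OutputStream out = socket.getOutputStream();
              out.write(request.getBytes("ISO-8859-1")); // bytes leave exactly as written
              out.flush();
              InputStream in = socket.getInputStream();
              for (int b; (b = in.read()) != -1; ) {
                  System.out.write(b); // echo the raw response
              }
              System.out.flush();
              socket.close();
          }
      }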

    Read the article

  • Nginx Joomla Internationalization URL rewriting

    - by cl3m
    I'm using Joomla in combination with Nginx, and I'm currently trying to achieve some URL rewriting for a website that supports several languages (Italian, French, Chinese, and German). The URLs have the language code after the domain name, like so:

      http://www.example.com/fr/test/test.html or http://www.example.com/de/test/test.html

    I'm looking to rewrite the URLs so the language code is part of the subdomain: http://www.example.com/fr/test/test.html becomes http://fr.example.com/test/test.html. Is there a way to achieve this with Nginx, or should I look into a third-party extension for Joomla (not my favorite choice)? Thanks !!
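
    A sketch of the redirect half in plain Nginx (untested; the language list is an assumption, and the fr. server block would still need to map requests back onto Joomla's /fr/ paths internally):

      server {
          listen 80;
          server_name www.example.com;

          # Send /fr/test/test.html to http://fr.example.com/test/test.html
          location ~ ^/(fr|de|it|zh)(/.*)?$ {
              return 301 http://$1.example.com$2;
          }
      }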

    Read the article

  • Deploying with Capistrano & Subversion. Working copy locked

    - by Rimian
    I'm deploying to a Debian server with Capistrano, and it fails due to a locked working copy. I narrowed it down to this:

      svn checkout http://myrepo.net/mysite/tags/1.0 /var/www/mysite/releases/1234

    So if I run:

      cap invoke COMMAND='svn checkout http://myrepo.net/mysite/tags/1.0 /var/www/mysite/releases/1234'

    I get an error:

      svn: Working copy '/var/www/mysite/releases/1' locked

    Running cleanup makes no difference, and the same command runs fine from the server. When I list the files in 1234/ I can see all the .svn and working-copy files. Can someone please point me in the right direction? How do I tell if the working copy is really locked? svn status shows nothing...
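
    A few diagnostic commands that may help (a sketch to run on the deploy server; note that the error names releases/1, not releases/1234, which is worth checking in itself):

      svn cleanup /var/www/mysite/releases/1     # clear stale locks in the copy svn complains about
      svn status /var/www/mysite/releases/1      # an 'L' in the third column marks locked directories
      find /var/www/mysite/releases/1 -name lock # pre-1.7 working copies keep lock files under .svn/
      # If releases/1 is just debris from a failed deploy, removing it is often simplest:
      # rm -rf /var/www/mysite/releases/1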

    Read the article

  • Installed Redmine on Ubuntu, but I have no clue how to use it to create users/projects/roles/tracking etc.

    - by Ronnie
    Hi all, I'm new to Redmine. I installed Redmine (with MySQL) on Ubuntu 10.04. These were the installation steps I followed:

      $ sudo apt-get install redmine redmine-mysql subversion
      $ ln -s /usr/share/redmine/public /var/www/redmine

    In /etc/apache2/mods-available/passenger.conf I added a PassengerDefaultUser www-data directive, and I configured the /var/www/redmine location in /etc/apache2/sites-available/default:

      RailsBaseURI /redmine
      PassengerResolveSymlinksInDocumentRoot on

    Then:

      $ sudo a2enmod passenger

    I then restarted the Apache2 server. That's it. Now when I type http://localhost/redmine/ in my browser, I can access my Redmine instance. So from here on, how do I create different users with different privileges, create different projects, and update issues and other project-management-related stuff? I know this sounds silly, but I couldn't find anything to help me proceed.

    Read the article

  • Login control doesn't work in Internet Explorer

    - by kamiar3001
    I use an ASP.NET authentication cookie in my application. Here is my web.config:

      <authentication mode="Forms">
        <forms path="/" defaultUrl="Default.aspx" loginUrl="Login.aspx" name=".ASPXAUTH"
               slidingExpiration="true" timeout="3000" domain="www.mysite.com"
               cookieless="UseDeviceProfile"/>
      </authentication>

    It works fine, but I have a problem: after a user has been working with the site for some days, suddenly my login control stops working. I found out it works again after deleting temporary files. Edit: please pay attention to the domain. When a user requests www.mysite.com everything is okay, but without "www" the login doesn't work. In Firefox both work very well; this is an IE problem. How can I solve this?
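
    One direction worth trying (a sketch, assuming both hostnames are served by the same application): set the cookie domain to the parent domain with a leading dot, so the browser sends the auth cookie for mysite.com and www.mysite.com alike.

      <authentication mode="Forms">
        <!-- ".mysite.com" covers both mysite.com and www.mysite.com -->
        <forms path="/" defaultUrl="Default.aspx" loginUrl="Login.aspx" name=".ASPXAUTH"
               slidingExpiration="true" timeout="3000" domain=".mysite.com"
               cookieless="UseDeviceProfile"/>
      </authentication>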

    Read the article

  • Is it possible to map a root domain URL to a Grails controller?

    - by firnnauriel
    Let's take an example: a Grails project, myproj, is deployed in Tomcat 6. It can be accessed anywhere through this link: http://www.mycompany.com/myproj. Let's say we purchase another domain, http://newcompany.com, and we would like to point it to http://www.mycompany.com/myproj/url. If I go to http://newcompany.com/12345, it should be the same as http://www.mycompany.com/myproj/url/12345. Can anyone tell me if this is possible, and how to implement it (change the Tomcat 6 config, add code in UrlMappings.groovy)? Thanks in advance.
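
    The Grails half might look like this (a sketch; the controller and action names are assumptions, and the Tomcat side would still have to answer for newcompany.com, e.g. by deploying the app as the ROOT context for that host or proxying to it):

      // grails-app/conf/UrlMappings.groovy
      class UrlMappings {
          static mappings = {
              // http://newcompany.com/12345 -> the same action that serves /url/12345
              "/$id"(controller: "url", action: "show")
          }
      }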

    Read the article

  • [C++] Wrong EOF when unzipping binary file

    - by djzmo
    Hello there, I tried to unzip a binary file into a memory buffer from a zip archive using Lucian Wischik's Zip Utils:

      http://www.wischik.com/lu/programmer/zip_utils.html
      http://www.codeproject.com/KB/files/zip_utils.aspx

      FindZipItem(hz, filename.c_str(), true, &j, &ze);
      char *content = new char[ze.unc_size];
      UnzipItem(hz, j, content, ze.unc_size);
      delete[] content;

    But it didn't unzip the file correctly; it stopped at the first 0x00 of the file. For example, when I unzip an MP3 file, it only unzips the first 4 bytes, 0x49443303 ("ID3" plus 0x03), because the 5th to 8th bytes are 0x00000000. I also tried capturing the ZR_RESULT, and it always returns ZR_OK (which means it completed without errors). I think this guy had the same problem, but no one replied to his question:

      http://www.codeproject.com/KB/files/zip_utils.aspx?msg=2876222#xx2876222xx

    Any kind of help would be appreciated :)
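
    One cause worth ruling out (a sketch, assuming the zip_utils API shown in the question): the buffer itself may be complete, and only the code inspecting it stops at the first 0x00 because it treats the bytes as a C string.

      #include <cstdio>
      #include <vector>
      #include "unzip.h" // zip_utils header from the question's library

      // Hypothetical helper: unzip one entry into a byte vector and report its true size.
      std::vector<char> UnzipToBuffer(HZIP hz, const char *filename)
      {
          int j;
          ZIPENTRY ze;
          FindZipItem(hz, filename, true, &j, &ze);
          std::vector<char> content(ze.unc_size);
          UnzipItem(hz, j, &content[0], ze.unc_size);
          // Check content.size(), not strlen(): printf("%s", ...) or strcpy()
          // would stop at the first zero byte even though all bytes are present.
          std::printf("unpacked %ld bytes\n", (long)content.size());
          return content;
      }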

    Read the article

  • Routing WCF Traffic Based on URI Domain Requested

    - by Ian Patrick Hughes
    Is there a way to route traffic to a target WCF service file based on the URL domain requested? Basically, I have a single WCF RESTful services project with 3 service files offering different endpoints. It's hosted on a single IIS6 site looking for multiple host header values on port 80. I want to route traffic to different service files depending on whether the requester is asking for www.site1.com, www.site2.com, or www.site3.com. It seems like the sort of thing I would use a global.asax or HTTP handler for, but I am not sure, since this is a regular WCF service application. Even though I am on IIS6 for this project, I don't mind using a URL rewriter and wildcard mapping if I have to. I have admin rights on the load-balanced servers where this will reside; I just want to know if there is a common/best practice before I start hacking my way around this.
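
    A sketch of the Global.asax idea (the .svc file names are hypothetical; on IIS6 this would need the wildcard mapping the question mentions so that "/" reaches ASP.NET at all): rewrite the incoming path to the matching service file based on the Host header.

      // Global.asax.cs
      protected void Application_BeginRequest(object sender, EventArgs e)
      {
          HttpContext ctx = HttpContext.Current;
          string host = ctx.Request.Url.Host.ToLowerInvariant();

          if (ctx.Request.Path == "/")
          {
              if (host.Contains("site1"))      ctx.RewritePath("~/Site1Service.svc");
              else if (host.Contains("site2")) ctx.RewritePath("~/Site2Service.svc");
              else                             ctx.RewritePath("~/Site3Service.svc");
          }
      }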

    Read the article

  • Force PHP through the .NET engine in IIS7

    - by Rippo
    I have converted a PHP site to ASP.NET MVC and have it hosted with the Rackspace Cloud. All works great, apart from the fact that some PHP URLs are still linked from other sites and indexed in search engines. My question is: what do I need to add to my web.config to force requests for .php pages through the .NET engine? These links work as expected, since I can catch the 404 and redirect where need be:

      http://www.securahome.net/myjunk.info
      http://www.securahome.net/myjunk.phpp

    However, this one doesn't:

      http://www.securahome.net/myjunk.php

    I have spoken to the Rackspace Cloud and they say "it's not possible, as IIS doesn't recognize php files. You can set up MIME types to handle them." That, however, makes no sense, and I think they did not understand the problem. Does anyone have a solution?
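
    A sketch of one direction (assuming the IIS7 integrated pipeline and that the host permits handler mappings; the handler name and type are hypothetical): map *.php to a small managed handler so those requests enter the .NET pipeline instead of IIS's PHP/static handling.

      <!-- web.config -->
      <system.webServer>
        <handlers>
          <add name="LegacyPhp" path="*.php" verb="*"
               type="MyApp.LegacyPhpHandler, MyApp"
               resourceType="Unspecified" preCondition="integratedMode" />
        </handlers>
      </system.webServer>

    MyApp.LegacyPhpHandler would be a custom IHttpHandler that looks up the old URL and responds with a 301 to its new home.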

    Read the article

  • Using JavaScript to change the URL used when a page is bookmarked...

    - by user30997
    JavaScript doesn't allow you to update window.location without triggering a reload. While I agree with this policy in principle (it shouldn't be possible to visit my website and have JavaScript change the location bar to read www.yourbankingsite.com), I believe that it should be possible to change www.foo.org/index to www.foo.org/help. The only reason I care about this is bookmarking. I'm working on a photo browser, and when a user is previewing a particular image, I want that image to be the default if they bookmark the page. For example, if they are viewing foo.org/preview/images0-30 and they click on image #15, that image is expanded to a medium-sized view. If they then bookmark the page, I want the bookmark URL to be foo.org/preview/images0-30/active15. Any thoughts, or is there a security barrier on this one as well? I can certainly understand the same policy being applied here, but one can dream.
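
    One sketch that stays within the same-document rule (function names hypothetical): the fragment part of the URL can be changed freely without a reload, and it is saved with bookmarks. The URL becomes .../images0-30#active15 rather than .../images0-30/active15, but the effect is the same.

      // Clicking image #15 records it in the fragment: no reload occurs.
      function selectImage(n) {
          window.location.hash = 'active' + n; // e.g. foo.org/preview/images0-30#active15
      }

      // On load, restore a bookmarked selection.
      window.onload = function () {
          var m = /^#active(\d+)$/.exec(window.location.hash);
          if (m) {
              expandImage(parseInt(m[1], 10)); // expandImage() stands in for the page's own viewer code
          }
      };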

    Read the article

  • How to accomplish "AuthType None" in Apache 2.2

    - by Technorati
    http://httpd.apache.org/docs/trunk/mod/mod_authn_core.html#authtype talks about "AuthType None" and has an awesome example of exactly what I need to do. Unfortunately, it appears to be new to 2.3/2.4. Is there any equivalent feature in 2.2?

      The authentication type None disables authentication. When authentication is enabled, it is normally inherited by each subsequent configuration section, unless a different authentication type is specified. If no authentication is desired for a subsection of an authenticated section, the authentication type None may be used; in the following example, clients may access the /www/docs/public directory without authenticating:

      <Directory /www/docs>
          AuthType Basic
          AuthName Documents
          AuthBasicProvider file
          AuthUserFile /usr/local/apache/passwd/passwords
          Require valid-user
      </Directory>

      <Directory /www/docs/public>
          AuthType None
          Require all granted
      </Directory>
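
    One 2.2-era equivalent worth trying (a sketch, untested): "Satisfy Any" grants access when either the host-based Allow rule or the user authentication passes, so an open Allow rule effectively disables authentication for the subdirectory.

      <Directory /www/docs>
          AuthType Basic
          AuthName Documents
          AuthUserFile /usr/local/apache/passwd/passwords
          Require valid-user
      </Directory>

      <Directory /www/docs/public>
          Order allow,deny
          Allow from all
          Satisfy Any
      </Directory>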

    Read the article

  • regexp to detect that the URL doesn't end with an extension

    - by devnieL
    Hello. I'm using this regular expression to detect whether a URL ends with .jpg:

      var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|]*^\.jpg)/ig;

    It detects a URL such as http://www.blabla.com/sdsd.jpg. But now I want to detect that the URL doesn't end with a jpg extension. I tried this:

      var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|]*[^\.jpg]\b)/ig;

    but I only get http://www.blabla.com/sdsd. Then I used this:

      var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|]*[^\.jpg]$)/ig;

    It works if the URL is alone, but doesn't work if the text is, e.g.:

      http://www.blabla.com/sdsd.jpg text
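
    A sketch of an alternative (the sample text is invented): [^\.jpg] is a character class meaning "one character that is not ., j, p, or g", not a "doesn't end with .jpg" test, so it may be simpler to capture each URL first and test its ending separately.

      // Match URLs first, then filter out the ones ending in .jpg
      var urlExp = /\b(?:https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|]/ig;
      var text = 'see http://www.blabla.com/sdsd.jpg text and http://www.blabla.com/page here';
      var urls = text.match(urlExp) || [];
      for (var i = 0; i < urls.length; i++) {
          if (!/\.jpg$/i.test(urls[i])) {
              console.log(urls[i]); // URL does not end with .jpg
          }
      }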

    Read the article

  • Deny HTTP access to a directory, allow access from WordPress plugin

    - by luke
    Hey. I need to prevent direct access to http://www.site.com/wp-content/uploads/folder/something.pdf through the browser. However, the Download Monitor plugin I am using, which allows logged-in users to download the file, needs to keep working. I am trying:

      Order Allow,Deny
      Deny from all
      Allow from all

    but now the download links do not work... even though (I think) they are links produced by the script, e.g.:

      http://www.site.com/wp-content/plugins/download-monitor/download.php?id=something.pdf

    Enter that in the address bar and you correctly get a WordPress message: 'You must be logged in to download this file.' However, if someone knows the URL where the file was uploaded (http://www.site.com/wp-content/uploads/folder/something.pdf), they can still access it directly. I don't know how (guesswork?) they would find the direct URL anyway, but the client wants it stopped! Thanks for any help.
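
    A sketch of one common setup (assuming download.php reads the file from disk with PHP rather than fetching it over HTTP): deny all HTTP access to the uploads folder, since .htaccess rules never apply to filesystem reads made by a script.

      # In wp-content/uploads/folder/.htaccess
      Order Deny,Allow
      Deny from all

    If the download links break under rules like this, that would suggest the plugin is requesting the file over HTTP instead of reading it from disk.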

    Read the article

  • How do I set up for sharing code (ASP.NET) across multiple domain names?

    - by Scott J.
    I have built a website, and now the customer wants to split it between three different domains. What is the best way to do this? This is what I have so far:

      c:/website1/ points to www.website1.com
      c:/website1/vd1/ points to www.website2.com
      c:/website1/vd2/ points to www.website3.com

    The webhost I'm working with has set it up this way, but now I'm getting a bunch of errors that seem to indicate it's not seeing the App_Code folder. Do I need to make a lot of changes? How does this affect the location references?
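
    One common direction (a sketch; the names are hypothetical): child applications don't see the parent's App_Code, so shared code is usually compiled into a class library that each site references from its own bin folder.

      // SharedLib/SiteHelpers.cs -- build as SharedLib.dll and copy into each site's bin
      namespace SharedLib
      {
          public static class SiteHelpers
          {
              // Hypothetical helper previously living in App_Code
              public static string CanonicalHost(string host)
              {
                  return host.StartsWith("www.") ? host : "www." + host;
              }
          }
      }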

    Read the article

  • PHP: URL detection (regexp) includes line breaks

    - by marco92w
    I want a function which takes text as input and gives back the text with the URLs turned into HTML links. My draft is as follows:

      function autoLink($text) {
          return preg_replace('/https?:\/\/[\S]+/i', '<a href="\0">\0</a>', $text);
      }

    But this doesn't work properly. For input text which contains ... http://www.google.de/ ... I get the following output:

      <a href="http://www.google.de/<br">http://www.google.de/<br</a> />

    Why does it include the line breaks? How can I limit it to the real URL? Thanks in advance!
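
    A possible refinement (a sketch): \S matches any non-whitespace character, including the < of an adjacent <br />, so excluding HTML-delimiting characters from the match keeps the link boundary at the real URL.

      function autoLink($text) {
          // Stop the match at whitespace, quotes, and angle brackets
          return preg_replace('/https?:\/\/[^\s<>"]+/i', '<a href="$0">$0</a>', $text);
      }

      echo autoLink('... http://www.google.de/<br /> ...');
      // ... <a href="http://www.google.de/">http://www.google.de/</a><br /> ...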

    Read the article
