Search Results

Search found 3028 results on 122 pages for 'urls'.


  • Serve most of a domain with Apache, but use mod_proxy to serve some URLs from Lighttpd

    - by Alex Pineda
    So we wish to host some pages on a new server with apache2, and embed some of our old content & functionality from another server with lighttpd in an iframe. I'm looking at this configuration from the apache docs (http://httpd.apache.org/docs/2.2/vhosts/examples.html#page-header) under "Using Virtual_host and mod_proxy" together. <VirtualHost *:*> ProxyPreserveHost On ProxyPass / http://192.168.111.2/ ProxyPassReverse / http://192.168.111.2/ ServerName hostname.example.com </VirtualHost> The only issue is that I want to proxy only on a subdomain, or even better, if I can keep the top domain and proxy only if the URL contains a particular path, i.e. "/myprocess.php". So in essence the DNS will point to apache2 as the "master router".
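
    A minimal sketch of the path-scoped variant described above, assuming the lighttpd box really is 192.168.111.2 as in the snippet and that mod_proxy/mod_proxy_http are loaded; hostname and path are placeholders, not a verified config:

      <VirtualHost *:80>
          ServerName hostname.example.com
          ProxyPreserveHost On

          # Only this path is handed to the lighttpd server;
          # everything else is served locally by Apache.
          ProxyPass        /myprocess.php http://192.168.111.2/myprocess.php
          ProxyPassReverse /myprocess.php http://192.168.111.2/myprocess.php
      </VirtualHost>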

    Read the article

  • still getting 403 on apt-get install: no proxy, URLs seem valid

    - by Berry Tsakala
    I'm trying to install LibreOffice (or OpenOffice) on Ubuntu Server 12.10. The packages exist - verified with "apt-cache search". The file /etc/apt/apt.conf.d/30proxy doesn't exist on my system, and the text 'proxy' isn't mentioned in grep proxy /etc/apt/apt.conf.d/. Other packages that I tried to apt-get install are installed OK. The only thing I haven't done is replace the repository servers; I'm afraid it could break the dpkg system! Related questions: http://askubuntu.com/questions/304340/apt-get-403-forbidden?rq=1 http://askubuntu.com/questions/303150/apt-get-403-forbidden-but-accessible-in-the-browser http://askubuntu.com/questions/409998/proxy-blocking-apt-get-allowing-wget-curl http://askubuntu.com/questions/367737/apt-get-upgrade-gives-403-forbidden-error?rq=1 What else can I do to solve this 403 error and install LibreOffice/OpenOffice using apt?

    Read the article

  • Directing from a 1und1 hosting solution, with urls intact

    - by Jelmar
    Hi, I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with temporary URL http://yogainun.mysubname.com/ and am hosting the domain name that is to be applied to it at 1und1.de. Right now I have set it up so that from the 1und1 domain name hosting, the address http://www.yoga-in-unternehmen.de/ is frame redirected to the subdomain I just referred to. But this is not what I want. http://www.yoga-in-unternehmen.de/ is to be the domain. With the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up, but this is what I want. With GoDaddy in a similar case, I just turned on DNS and changed the name servers. That worked without a problem, but not with 1und1. Is there something I am missing?

    Read the article

  • Will Google Analytics track URLs that just redirect?

    - by Derick Bailey
    I have a link on my site. That link goes to another URL on my site. The code on the server sees that resource being requested and redirects the browser to another website. Will Google Analytics be able to know that the user requested the URL from my server and was redirected? Specifically, I set up a /buy link on my watchmecode.net site to try and track who is clicking the "Buy & Download" button. This link/button hits my server, and my server immediately does a redirect to the PayPal processing so the user can buy the screencast. Is Google Analytics going to know that the user hit the /buy URL on my site, and track that for me? If not, what can I do to make that happen?

    Read the article

  • How to structure a set of RESTful URLs

    - by meetamit
    Kind of a REST lightweight here... Wondering which url scheme is more appropriate for a stock market data app (BTW, all queries will be GETs as the client doesn't modify data): Scheme 1 examples: /stocks/ABC/news /indexes/XYZ/news /stocks/ABC/time_series/daily /stocks/ABC/time_series/weekly /groups/ABC/time_series/daily /groups/ABC/time_series/weekly Scheme 2 examples: /news/stock/ABC /news/index/XYZ /time_series/stock/ABC/daily /time_series/stock/ABC/weekly /time_series/index/XYZ/daily /time_series/index/XYZ/weekly Scheme 3 examples: /news/stock/ABC /news/index/XYZ /time_series/daily/stock/ABC /time_series/weekly/stock/ABC /time_series/daily/index/XYZ /time_series/weekly/index/XYZ Scheme 4: Something else??? The point is that for any data being requested, the url needs to encapsulate whether an item is a Stock or an Index. And, with all the RESTful talk about resources I'm confused about whether my primary resource is the stock & index or the time_series & news. Sorry if this is a silly question :/ Thanks!
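
    For comparison, here is roughly what scheme 1 looks like when written out as route declarations. This is only a sketch in Flask (an assumption, not the poster's actual stack), keeping the resource type (stock vs. index) as its own path segment so either kind of instrument can own news and time-series sub-resources:

      # Sketch of scheme 1 as routes; Flask is assumed purely to make the layout concrete.
      from flask import Flask, jsonify

      app = Flask(__name__)

      @app.route("/stocks/<symbol>/news")
      def stock_news(symbol):
          # e.g. GET /stocks/ABC/news
          return jsonify(kind="stock", symbol=symbol, resource="news")

      @app.route("/indexes/<symbol>/news")
      def index_news(symbol):
          # e.g. GET /indexes/XYZ/news
          return jsonify(kind="index", symbol=symbol, resource="news")

      @app.route("/stocks/<symbol>/time_series/<interval>")
      def stock_time_series(symbol, interval):
          # e.g. GET /stocks/ABC/time_series/daily or .../weekly
          return jsonify(kind="stock", symbol=symbol, interval=interval)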

    Read the article

  • Search engine bots accessing strange URLs

    - by casasoft
    We have ELMAH enabled on our site and get errors whenever a Page Not Found error is triggered on the website. We have recently redesigned the website, so we understand that search engine robots might try to access previously indexed pages and end up with Page Not Found errors. For this reason, we have set up permanent redirects from such previously indexed pages to the respective new pages. The website in question is www.chambercollege.com and, for example, a previously indexed URL was www.chambercollege.com/special-offers.aspx. This page is no longer accessible, so we have created the necessary permanent redirect to the respective page at www.chambercollege.com/en/content/special-offers-161/. Now we are starting to receive Page Not Found errors from search engine bots (e.g. the MSN bot) trying to access the URL www.chambercollege.com/special-offers.aspx/images/shadow_right.jpg/. Any idea how a search engine could make up that strange URL, and do you have any suggestions on what best to do?
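
    One common way to stop chasing every mangled variant is a pattern-based permanent redirect that also swallows anything a bot has glued onto the old path. A sketch only, assuming mod_alias is available (paths are taken from the question):

      # Catches /special-offers.aspx plus any trailing junk such as
      # /special-offers.aspx/images/shadow_right.jpg/ and 301s it to the new page.
      RedirectMatch 301 ^/special-offers\.aspx(/.*)?$ /en/content/special-offers-161/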

    Read the article

  • Share buttons vs sharer urls

    - by TeeOh
    As some people might know, adding share buttons from Facebook and Twitter can cause a page to slow down. I've seen many sites pass on the common iframe implementations that these sites offer and simply create icons that link to a sharer URL for better control of page performance. http://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.cnn.com%2F&t=CNN%26s+website%27 However, I've also read that Facebook is dropping support for these links. For example, this link now redirects to the Like Button. http://www.facebook.com/facebook-widgets/share.php Here is an article noting that Facebook is deprecating/has deprecated its share functionality and is sticking with the Like button. http://www.barbariangroup.com/posts/7544-the_facebook_share_button_has_been_deprecated_called_it I'm assuming this is the same for the sharer URL. If the sharer URL is no longer a reliable option, what other methods are there besides using third-party widgets (like AddThis)?

    Read the article

  • RewriteRule for URLs with spaces

    - by Robert Cailliau
    My site's pages are in multiple languages whereby each language version shares its media (images) with the other language versions. I place all versions and the media in a single directory with the same name. E.g. pages mypage-en.html, mypage-fr.html etc. will sit in directory mypage. The directory path suffices to reference a page: h t t p : //....../mypage/ is good enough, there is no need for h t t p : //....../mypage/mypage-en.html A rewrite with RewriteRule ^(.*)/([a-zA-Z0-9]+)/?$ /$1/$2/$2-en.html lets me use the shorter form. But what if the name mypage contains spaces (which some do)? I want h t t p : //....../my page/ to lead to h t t p : //....../my page/my page.html Using RewriteRule ^(.*)/([a-zA-Z0-9|\s]+)/?$ /$1/$2/$2-en.html did not work. Any hints welcome. (Please do not ask me why I want to do this, nor tell me I should not use spaces in file names.)
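
    A sketch of one variant to try, not a verified fix: match any directory name (spaces included) by excluding only the slash, and re-escape the captured text with mod_rewrite's B flag so the space survives the substitution as %20. It assumes the same layout as the rule above:

      RewriteRule ^(.*)/([^/]+)/?$ /$1/$2/$2-en.html [B,L]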

    Read the article

  • Browser Item Caching and URLs

    - by Damon Armstrong
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page.  But during development it’s very useful to NOT have things cached so you are always looking at the most up-to-date file.  You can always turn off caching on your browser, but if you use your browser for daily browsing then it’s not the greatest option.  To avoid caching we would always just slap a randomly generated GUID to the back of the URL of any items we didn’t want to cache (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d).  It worked well, but you had to remember to remove the random GUID when it went to production. However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn’t need to be removed from production code – just slap the last modified date of the file on the end of the URL (or something generated from the modification date).  This was kind of a genius approach because it gives you the best of both worlds.  If you modify the file, the browser goes out and gets the newest version.  If you don’t modify the file, it has the cached copy.  Very helpful!  The only downside is that you do have to read the modification date from the file, which does technically take some time.
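
    The idea in that last approach is small enough to show in a few lines. A minimal sketch of the cache-busting technique described above, in Python purely for illustration (the post's own context is .NET/Silverlight), where the file's last-modified time is appended to the URL so the URL only changes when the file does:

      import os

      def versioned_url(url_path, file_path):
          """Return url_path with the file's modification time appended as a cache-buster."""
          mtime = int(os.path.getmtime(file_path))  # read the modification date from disk
          return f"{url_path}?v={mtime}"

      # Example usage (paths are placeholders):
      # versioned_url("/images/image.png", "/var/www/site/images/image.png")
      # -> "/images/image.png?v=1700000000"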

    Read the article

  • How to manually list a set of URLs for search engines to index

    - by MarutiB
    So I have created a video website which has thousands of videos, and thousands more get added to it on a daily basis. Here is my problem: I have created a website which basically loads the skeleton in HTML and puts all the content in through JavaScript and Ajax. The problem is search engines aren't going anywhere except the home page. Is there a way, say in robots.txt, where I give a link to a single HTML page which has links to all these videos? I agree my site is not accessible for a non-JavaScript user, but stats show that this ratio is very low (0.2%). Is there a way I can still keep the complete AJAX website and still get each individual video listed on Google?
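
    What this describes is essentially the job of an XML sitemap: a single machine-readable file, linked from robots.txt, that lists every video page for crawlers. A sketch of the two standard pieces (domain and paths are placeholders):

      # robots.txt - point crawlers at a sitemap listing every video page
      Sitemap: http://example.com/sitemap.xml

      <!-- sitemap.xml (excerpt), regenerated whenever videos are added -->
      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url><loc>http://example.com/video/12345</loc></url>
        <url><loc>http://example.com/video/12346</loc></url>
      </urlset>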

    Read the article

  • Apache2 - rewrite a bunch of specified pathname URLs to one URL

    - by James Nine
    I need to rewrite a bunch of URLs (about 100 or so) for SEO purposes, and there may be more being added in the future (probably another 50-100 later on). I need a flexible way of doing this and so far, the only way I can think of is to edit the .htaccess file using the rewrite engine. For example, I have a bunch of URLs like this (please note that the query string is irrelevant, and dynamic; it could be anything. I am using them purely as an example. I am only focusing on the pathname, the part between the hostname and the query string): http://example.com/seo_term1?utm_source=google&utm_medium=cpc&utm_campaign=seo_term http://example.com/another_seo_term2?utm_source=facebook&utm_medium=cpc&utm_campaign=seo_term http://example.com/yet_another_seo_term3?utm_source=example_ad_network&utm_medium=cpc&utm_campaign=seo_term http://example.com/foobar_seo_term4 http://example.com/blah_seo_term5?test=1 etc... And they are all being rewritten to (for now): http://example.com/ What's the most efficient/effective way of doing this so that I may be able to add more terms in the future? One solution I came across is to do this (in the .htaccess file): RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ / [NC,QSA] However, the problem with this solution is that even invalid URLs (such as http://example.com/blah) will be rewritten to http://example.com instead of giving a 404 code (which is what it is supposed to do anyway). I'm still trying to figure out how all this works, and the only way I can think of is to write 100 more RewriteCond statements (such as: RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR]) before the RewriteRule directive. For example: RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR] RewriteCond %{REQUEST_URI} =/another_seo_term2 [NC,OR] RewriteCond %{REQUEST_URI} =/yet_another_seo_term3 [NC,OR] RewriteCond %{REQUEST_URI} =/foobar_seo_term4 [NC,OR] RewriteCond %{REQUEST_URI} =/blah_seo_term5 [NC] RewriteRule ^(.*)$ / [NC,QSA] But that doesn't sound very efficient to me. Is there a better way?
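
    One pattern that avoids writing a RewriteCond per term is a RewriteMap lookup against a plain-text list, so new terms only require a new line in a file. A sketch, with the caveat that RewriteMap has to be declared in the server or virtual-host config rather than in .htaccess (file path and map name are made up):

      # In the virtual host (RewriteMap is not allowed in .htaccess):
      RewriteEngine on
      RewriteMap seoterms txt:/etc/apache2/seo_terms.txt

      # /etc/apache2/seo_terms.txt, one entry per line, e.g.:
      #   seo_term1              1
      #   another_seo_term2      1
      #   yet_another_seo_term3  1

      # Rewrite to / only when the first path segment is listed in the map;
      # anything else falls through and can 404 as usual.
      RewriteCond ${seoterms:$1|0} =1
      RewriteRule ^/?([^/]+)$ / [QSA,L]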

    Read the article

  • Storing URLs while Spidering

    - by itemio
    I created a little web spider in Python which I'm using to collect URLs. I'm not interested in the content. Right now I'm keeping all the visited URLs in a set in memory, because I don't want my spider to visit URLs twice. Of course that's a very limited way of accomplishing this. So what's the best way to keep track of my visited URLs? Should I use a database? Which one: MySQL, SQLite, PostgreSQL? How should I save the URLs, as a primary key, trying to insert every URL before visiting it? Or should I write them to a file? One file or multiple files, and how should I design the file structure? I'm sure there are books and a lot of papers on this or similar topics. Can you give me some advice on what I should read?
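
    A sketch of the database route using sqlite3 from Python's standard library (table and file names are made up); the primary-key constraint turns "have I already visited this?" into a single INSERT attempt:

      import sqlite3

      conn = sqlite3.connect("crawler.db")
      conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

      def mark_visited(url):
          """Return True if the URL is new (and now recorded), False if it was already seen."""
          try:
              with conn:
                  conn.execute("INSERT INTO visited (url) VALUES (?)", (url,))
              return True
          except sqlite3.IntegrityError:
              return False

      # Example: only fetch a URL the first time it turns up.
      for url in ["http://example.com/", "http://example.com/", "http://example.com/a"]:
          if mark_visited(url):
              print("fetching", url)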

    Read the article

  • Django admin urls return INVALID REQUEST! - Django

    - by RadiantHex
    Hi folks, my admin URLs sit behind a prefix, set up as follows: #1: (r'^admin/', include(admin.site.urls)), is placed within urls_core.py. #2: (r'^api/', include('project.urls_core')), is placed within urls.py. All admin URLs work fine except app indexes. If I go to any URL such as: /api/admin/core/ /api/admin/registration/ /api/admin/users/ /api/admin/filters/ I receive 'INVALID REQUEST' as my response. The status code is 200 (OK) though. I have never received this error message before. Does anyone have a clue? Thanks guys!

    Read the article

  • Getting a full list of the URLs in a Rails application

    - by Laurie Young
    How do I get a complete list of all the URLs that my Rails application could generate? I don't want the routes that I get from rake routes; instead I want to get the actual URLs corresponding to all the dynamically generated pages in my application... Is this even possible? (Background: I'm doing this because I want a complete list of URLs for some load testing I want to do, which has to cover the entire breadth of the application.)

    Read the article

  • groovy thread for urls

    - by Srinath
    I wrote logic for testing URLs using threads. It works well for a small number of URLs but fails with more than 400 URLs to check. class URL extends Thread{ def valid def url URL( url ) { this.url = url } void run() { try { def connection = url.toURL().openConnection() connection.setConnectTimeout(10000) if(connection.responseCode == 200 ){ valid = Boolean.TRUE }else{ valid = Boolean.FALSE } } catch ( Exception e ) { valid = Boolean.FALSE } } } def threads = []; urls.each { ur -> def reader = new URL(ur) reader.start() threads.add(reader); } while (threads.size() > 0) { for(int i =0; i < threads.size();i++) { def tr = threads.get(i); if (!tr.isAlive()) { if(tr.valid == true){ threads.remove(i); i--; }else{ threads.remove(i); i--; } } } } Could anyone please tell me how to optimize the logic and where I was going wrong? Thanks in advance.

    Read the article

  • Cron job for list of urls

    - by mathew
    Duplicate: http://stackoverflow.com/questions/2968918/how-do-i-do-cron-job-for-list-of-urls-closed - sorry for the confusion... How do I do a cron job using a list of URLs? The search path is mentioned below: www.mydomain.com/search?url=www.google.com After google.com, the next URL in urllist.txt should be searched, so that by the end of the day the whole list of URLs in urllist.txt will have been processed and saved in my database. What I need is how to feed the URLs from urllist.txt into the search path.....and put it in a cron job; the rest I can handle. Thanks Mathew
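
    A sketch of the kind of script a cron entry could run, walking urllist.txt through the search path described above (the domain and file name are taken from the question; everything else, including the use of Python's standard library, is an assumption):

      # Run from cron, e.g.:  0 2 * * * /usr/bin/python3 /path/to/run_searches.py
      from urllib.parse import quote
      from urllib.request import urlopen

      SEARCH = "http://www.mydomain.com/search?url="

      with open("urllist.txt") as f:
          for line in f:
              target = line.strip()
              if not target:
                  continue
              # e.g. http://www.mydomain.com/search?url=www.google.com
              with urlopen(SEARCH + quote(target, safe="")) as resp:
                  resp.read()  # the search page itself saves its results to the database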

    Read the article

  • How to load secure S3 images into Flex with temporary URLs

    - by Yarin
    I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browsers' address bar. Moreover, Flex has no problem loading my images with a non-signed url when they are public, but as soon as I try signing the urls all the images fail, whether public or not. I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results to an Image control. Is this just an issue with the Image control not being able to recognize signed urls? Do I have to load the image from a byte array? What would that look like?

    Read the article

  • Plone, behaviour of URLs

    - by jpet
    The situation is the following: I created a site with Plone, and developed and used it behind a test URL. Now it has to be published, but the test URL is not appropriate and I don't want to move the site. I think that if I use a redirect, it won't appear in the URL bar, except in the case of the site start page. Am I wrong? (The test URL should not be used, because it will be a "semi-official" site.) What do you suggest doing? As far as I can see, Plone uses absolute URLs everywhere. I can add relative URLs, but if I create a new page, a new event, etc., then they have absolute URLs on other automatically generated inner pages. Is there any way to convert these URLs to relative paths? Is there any setting possibility where only a checkbox changes this default setting?

    Read the article

  • Centralized way of organizing urls in Jersey?

    - by drozzy
    Forgive me if I am asking an obvious question (maybe I missed it in the docs somewhere?) but has anyone found a good way to organize their URLs in the Jersey Java framework? I mean organizing them centrally in your Java source code, so that you can be sure there are not two classes that refer to the same URL. For example, Django has a really nice regex-based matching. I was thinking of doing something like an enum: enum Urls{ CARS ("cars"), CAR_INFO ("car", "{info}"); public Urls(String path, String args) ... } but you can imagine that gets out of hand pretty quickly if you have URLs like: cars/1/wheels/3 where you need multiple path-ids interleaved with one another... Any tips?

    Read the article

  • mod_rewrite for clean URL doesn't convert the URL to clean URL (but it's accessible)

    - by deathlock
    Basically what I want to do is to convert this: http://localhost/jariungu/user_caleg.php?idCaleg2014=3 into this: http://localhost/jariungu/caleg/3 I have managed to make /jariungu/caleg/3 direct to the original URL (as in, if I open that URL, it directs me to the appropriate page). The problem is, once opened, the URL returns to the original, ugly one in the address bar. This is what I tried. Could someone provide some help? <IfModule mod_rewrite.c> Options +FollowSymlinks RewriteEngine On RewriteBase /jariungu/ RewriteRule ^caleg\/([0-9]+)\/([a-zA-Z]+\s*[0-9]*)/?$ caleg.php?idCaleg2014=$1&namaCaleg=$2 [NC,L] RewriteRule ^caleg\/([0-9]+)/?$ caleg.php?idCaleg2014=$1 [NC,L] </IfModule>
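
    The rules above only rewrite internally, so the address bar keeps whatever URL was actually requested; the clean form shows up only if links point at it directly and, optionally, direct requests for the ugly form are redirected to it. A sketch of such a redirect to sit alongside the rules above, assuming the ugly entry point is user_caleg.php as in the first URL quoted (not a verified fix):

      # Externally redirect direct requests for the ugly URL to the clean one.
      # THE_REQUEST is matched so the rule does not loop on the internal rewrite above.
      RewriteCond %{THE_REQUEST} \s/jariungu/user_caleg\.php\?idCaleg2014=([0-9]+) [NC]
      RewriteRule ^user_caleg\.php$ caleg/%1? [R=301,L]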

    Read the article

  • How to suppress PHPSESSID in URL for Googlebot?

    - by Roque Santa Cruz
    I use cookie-based sessions, and they work for normal interaction with our site. However, when Googlebot comes crawling, our PHP framework, Yii, needs to append ?PHPSESSID to each URL, which doesn't look that good in the SERPs. Any way to suppress this behavior? PS. I tried to utilize ini_set('session.use_only_cookies', '1');, but it does not work. PPS. To get an impression of the SERPs, they look like this: http://www.google.com/search?q=site:wwwdup.uni-leipzig.de+inurl:jobportal
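
    For reference, the php.ini-level settings usually combined to keep the session ID out of URLs look like the sketch below; whether Yii's own session component overrides them is a separate question, so this is an assumption rather than a confirmed fix:

      ; php.ini (or equivalent per-directory values)
      session.use_cookies = 1
      session.use_only_cookies = 1
      session.use_trans_sid = 0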

    Read the article
