Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.


  • using .htaccess to redirect from friendly url to actual file

    - by Kohalza
    I have the following RewriteRule in my .htaccess to redirect from a friendly URL to my main application file:

        RewriteRule ^\/(.*).html$ home/www/page.php?p=$1 [L]

    This should send any URL that points to an HTML page to page.php, with the URL as a parameter for the app to parse. It works for URLs like http://www.example.com/hello.html. The problem is that I get a 404 error when the URL contains a directory path, for example http://www.example.com/category/hello.html. The error reads: "File does not exist: /home/www/category". It seems Apache is first looking for the 'category' path instead of processing the .htaccess. Any ideas how to solve this?
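
    A sketch of one likely fix, assuming the .htaccess sits in the document root (/home/www): in per-directory context the pattern is matched against a path without a leading slash, so the ^\/ prefix stops the rule from matching nested paths, and the substitution should not carry the home/www filesystem prefix:

        RewriteEngine On
        # skip requests for files that really exist on disk
        RewriteCond %{REQUEST_FILENAME} !-f
        # matches hello.html as well as category/hello.html
        RewriteRule ^(.*)\.html$ page.php?p=$1 [L,QSA]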

    Read the article

  • slashes in url variables

    - by namtax
    Hi there. I have set up my ColdFusion application to use dynamic URLs, such as www.musicExplained/index.cfm/artist/:VariableName. However, my variable names will sometimes contain slashes, such as www.musicExplained/index.cfm/artist/GZA/Genius. This causes a problem, because my application presumes that the slash in the variable name marks a different section of the website (the artist's albums), so the URL fails. Is there any way to prevent this from happening? Do I need a function that replaces slashes in the variable names with another character? Thanks

    Read the article

  • Does urllib2.urlopen() actually fetch the page?

    - by beagleguy
    Hi all, I was wondering: when I use urllib2.urlopen(), does it just read the headers, or does it actually bring back the entire web page? I.e., does the HTML actually get fetched on the urlopen call or on the read() call?

        handle = urllib2.urlopen(url)
        html = handle.read()

    The reason I ask is this workflow: I have a list of URLs (some of them using short-URL services). I only want to read a web page if I haven't seen that URL before, but I need to call urlopen() and use geturl() to get the final page the link resolves to (after the 302 redirects), so I know whether I've crawled it yet. I don't want to incur the overhead of grabbing the HTML if I've already parsed that page. Thanks!
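
    For what it's worth: urlopen() sends the request, follows redirects and reads the status line and headers when it is called; the body is transferred when read() is called. A sketch of the dedup workflow built on that, assuming a urls list (names are illustrative):

        import urllib2

        seen = set()
        for url in urls:
            handle = urllib2.urlopen(url)  # request sent, 302s followed, headers read
            final = handle.geturl()        # the post-redirect URL
            if final in seen:
                handle.close()             # skip transferring the body
                continue
            seen.add(final)
            html = handle.read()           # the body comes over the wire here
            # ... parse html ...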

    Read the article

  • Apache loads any file that begins with the same string as used in url. How to prevent this?

    - by MarshallBananas
    If I point to mywebsite.com/search and there is a file called search.php, search.html, search.inc.php or search.whatthehell.php in the website's directory, Apache will serve that file instead of 404'ing. What is even more annoying is that if I point to mywebsite.com/search/string?also=whatever, Apache will still serve any file whose name begins with "search.". Also, all RewriteRules whose patterns contain filenames that exist in the directory are ignored/useless. I'm using Apache 2 on Mac with an unmodified httpd.conf. How do I prevent it from rewriting my URLs so freely?
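
    The behaviour described matches mod_negotiation's MultiViews feature, which maps /search onto any servable search.* file. A sketch of turning it off, assuming AllowOverride permits Options:

        # in .htaccess, or in the site's <Directory> block in httpd.conf
        Options -MultiViews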

    Read the article

  • Choosing a Portal / CMS software for developing multi brand websites?

    - by hbagchi
    We are in the early stages of overhauling a multi-brand website, built on a custom-developed Java MVC framework, to enable web 2.0 features. Built-in features we are looking at are: i18n, SSO, content search and indexing, personalization, mashup support, Ajax support, rich-media content storage and management, search engine optimization friendliness, bookmarkable URLs, support for social networking sites, and support for page composition and decoration using templates. A combination of these features is supported by many portal and CMS packages. Any insights into using a portal/CMS combination to address these requirements will be very helpful! This is a follow-up on this post, focusing on the portal/CMS angle.

    Read the article

  • What are the main differences between: Seaside vs Aida vs Iliad

    - by elviejo
    What are the differences between the three Smalltalk web application frameworks? Some starting points: What is the sweet spot for each framework? In which cases would you use one or the other? What are their weaknesses? Which one has the cleanest URLs? How do they handle Ajax? Do they have a preferred approach to persistence? I'm just trying to decide which framework is appropriate for each kind of application.

    Read the article

  • url to http request object

    - by takeshin
    I need to convert a string like this:

        $url = 'module/controller/action/param1/param1value/paramX/paramXvalue';

    to a URL that respects the current router (including translation and so on). Usually I generate target URLs using the url view helper, but for that I need to specify all the params, so I would have to explode the string manually. I tried to use the request object, like this:

        $request = new Zend_Controller_Request_Http();
        // some code here passing the $url
        Zend_Debug::dump($request->getControllerName()); // null instead of 'controllers'
        Zend_Debug::dump($request->getParams());         // null instead of an array

    but this seems suspect. Do I need to dispatch this request? How do I handle this case well?
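
    One approach that avoids a full dispatch (a sketch, assuming the front controller is already bootstrapped so the router knows the routes): set the URI on a fresh request object and run it through the router, which fills in module, controller, action and params without executing anything.

        $request = new Zend_Controller_Request_Http();
        $request->setRequestUri('/module/controller/action/param1/param1value/paramX/paramXvalue');
        Zend_Controller_Front::getInstance()->getRouter()->route($request);
        Zend_Debug::dump($request->getControllerName()); // 'controller'
        Zend_Debug::dump($request->getParams());         // includes param1, paramX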

    Read the article

  • How to handle Clean URIs in Classic ASP using PATH_INFO?

    - by Mario
    I'm trying to handle clean URIs in a Classic ASP application. In PHP, I was able to use URIs like http://example.com/index.php/foo/bar/baz and have /foo/bar/baz available in the PATH_INFO environment variable. (I usually add a rewrite rule so I do not need the index.php segment.) However, I don't seem to be able to mimic this in Classic ASP. If I try http://example.com/index.asp/foo/bar/baz, I get a 404 error. Is there a way to add a path after the index.asp segment and get the PHP-like behaviour in ASP? Note: I'm currently using the workaround of rewriting URLs of the form http://example.com/foo/bar/baz/ to index.asp?path=/foo/bar/baz, since I can't seem to get index.asp/foo/bar/baz to work.

    Read the article

  • Parallel CURL function Help .. php

    - by Webby
    Hello. Firstly, let me explain: the code below is just a tiny snippet of the code I'm using on the working site. Basically I'm hoping someone can help me rewrite just the function below to enable parallel cURL calls, so that it fits into the existing code without me having to rewrite the whole thing from the ground up like some of the samples I've been finding today. Any ideas?

        function get_data($url) {
            $ch = curl_init();
            $timeout = 5;
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
            $data = curl_exec($ch);
            curl_close($ch);
            return $data;
        }

    P.S. $url already runs through a huge bunch of URLs in a loop, so I'd hope to keep that intact. Help always appreciated and rewarded.
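
    A sketch of a curl_multi-based companion that keeps the same options as get_data() but takes a batch of URLs and returns their bodies under the same keys, so the existing loop can hand over chunks instead of single URLs:

        function get_data_parallel(array $urls) {
            $mh = curl_multi_init();
            $handles = array();
            foreach ($urls as $key => $url) {
                $ch = curl_init();
                curl_setopt($ch, CURLOPT_URL, $url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
                curl_multi_add_handle($mh, $ch);
                $handles[$key] = $ch;
            }
            // run all transfers at once, waiting for activity between iterations
            $running = null;
            do {
                curl_multi_exec($mh, $running);
                curl_multi_select($mh);
            } while ($running > 0);
            // collect results and clean up
            $results = array();
            foreach ($handles as $key => $ch) {
                $results[$key] = curl_multi_getcontent($ch);
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
            curl_multi_close($mh);
            return $results;
        }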

    Read the article

  • Passing URIs as URL arguments in Drupal

    - by wynz
    I'm running into problems trying to pass absolute URIs as parameters with clean URLs enabled. I've got hook_menu() set up like this:

        function mymodule_menu() {
          return array(
            'page/%' => array(
              'title' => 'DBpedia Display Test',
              'page callback' => 'mymodule_dbpedia_display',
              'page arguments' => array(1),
            ),
          );
        }

    and in the page callback:

        function mymodule_dbpedia_display($uri) {
          // Make an HTTP request for this URI
          // and then render some things
          return $output;
        }

    What I'm hoping to do is somehow pass full URIs (e.g. "http://dbpedia.org/resource/Coffee") to my page callback. I've tried a few things and nothing's worked so far. http://mysite.com/page/http%3A%2F%2Fdbpedia.org%2Fresource%2FCoffee completely breaks Drupal's rewriting. http://mysite.com/page/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FCoffee gives a 404. http://mysite.com/page/http://dbpedia.org/resource/Coffee returns just "http:", which makes sense. I could probably use $_GET to pull out the whole query string, but I guess I'm hoping for a more 'Drupal' solution. Any suggestions?
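
    A sketch of the $_GET fallback mentioned above, for what it's worth: the 404 in the second attempt likely comes from the trailing slash before the query string, so registering a plain 'page' item and dropping that slash may be enough.

        // e.g. http://mysite.com/page?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FCoffee
        function mymodule_dbpedia_display() {
          $uri = isset($_GET['uri']) ? $_GET['uri'] : '';
          // make the HTTP request for $uri here and build the markup from it
          $output = check_plain($uri); // placeholder rendering
          return $output;
        }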

    Read the article

  • Good book(s) for MMORPG design & implementation?

    - by mawg
    I am a long-time professional C/C++ programmer (mostly embedded systems) and a hobbyist Windows & PHP hacker. Can anyone recommend a book (or books) specifically aimed at designing and (hopefully) implementing an MMORPG? I don't need general how-to-design or how-to-code books. Maybe a really good generic games book, but I am not interested in first-person shooters; I want to know what it takes to implement an MMORPG. Good books, maybe also good URLs. Thanks. Just searching eBay and Amazon threw up a whole slew of books; Amazon's customer reviews give me an idea of how good they are, and the overview tells me what areas they cover.

    Read the article

  • Using md5_file(); doesn't return the md5 sometimes?

    - by Rob
        <?php
        include_once('booter/login/includes/db.php');
        $query = "SELECT * FROM shells";
        $result = mysql_query($query);
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $hash = @md5_file($row['url']);
            echo $hash . "<br>";
        }
        ?>

    The above is my code. Usually it works flawlessly on most URLs, but every now and then it just skips the md5 on a line, as if it doesn't retrieve it, even though the file is there. I can't figure out why. Any ideas?
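
    One way to see which rows are failing (a sketch): drop the @ suppression and test the return value. md5_file() returns FALSE when it cannot read the file, which for remote URLs usually means a timeout or an HTTP error rather than a missing file.

        $hash = md5_file($row['url']);
        if ($hash === false) {
            // the fetch failed; report the URL instead of printing an empty line
            echo 'could not read ' . htmlspecialchars($row['url']) . "<br>";
        } else {
            echo $hash . "<br>";
        }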

    Read the article

  • Can an URL shortener pass parameters?

    - by ManniAT
    Hi, I use bit.ly to shorten my URLs. My problem: parameters are not passed. Let me explain. I use http://bit.ly/MYiPhoneApps, which redirects (let's say) to http://iphone.pp-p.net/default.aspx. Now when I try http://bit.ly/MYiPhoneApps?param=xx, this param is not added to the resulting URL. I know I could create an extra short URL that includes a parameter, so that http://bit.ly/WithParam would resolve to http://www.mysite.com/somepath/apage.aspx?Par1=yy, and so forth. But what I want is a short URL directing to a page, and then to add a parameter to this shortened URL, which should (of course) land at my page. Is this a shortcoming of bit.ly (and others are maybe able to do it), or does parameter forwarding simply not work with 301 redirects? Manfred

    Read the article

  • problems with url and email regex when searching text

    - by Grant Collins
    Hi, I'm having problems with regular expressions that I got from regexlib. I am trying to do a preg_replace() on some text, to replace/remove email addresses and URLs (http/https/ftp). The code I have is:

        $sanitiseRegex = array(
            'email' => '/^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$/',
            'http' => '/^(http|https|ftp)\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(:[a-zA-Z0-9]*)?/?([a-zA-Z0-9\-\._\?\,\'/\\\+&%\$#\=~])*$/',
        );
        $replace = array(
            'xxxxx',
            'xxxxx'
        );
        $sanitisedText = preg_replace($sanitiseRegex, $replace, $text);

    However, I am getting the following error: Unknown modifier '/', and $sanitisedText is null. Can anyone see the problem with what I am doing, or why the regex is failing? Thanks
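
    The immediate error comes from the bare / characters inside the http pattern: with / as the delimiter, the first unescaped / ends the pattern early and the next character is read as a modifier. A sketch of the same idea with ~ as the delimiter, and with the ^...$ anchors dropped, since preg_replace() over running text has to match substrings rather than the whole string:

        $sanitiseRegex = array(
            'email' => '~[a-zA-Z0-9_\-\.]+@([a-zA-Z0-9\-]+\.)+[a-zA-Z]{2,4}~',
            'http'  => '~(http|https|ftp)://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(:[0-9]+)?(/[^\s]*)?~',
        );
        $sanitisedText = preg_replace($sanitiseRegex, array('xxxxx', 'xxxxx'), $text);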

    Read the article

  • Automated URL checking from a MySQL table

    - by Rob
    Okay, I have a list of URLs in a MySQL table. I want a script to automatically check each link in the table for a 404, and afterward store whether the URL 404'd or not, along with the time it was last checked. Is this even possible to do automatically, even if no one runs the script? I.e., no one visits the page for a few days, but even with no one visiting, the test still runs. If it's possible, how could I go about making a button to do this?
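
    Yes; run the script from cron (or your host's scheduled-task equivalent) rather than from a page view, and a button can simply trigger the same script on demand. A sketch of the check itself, assuming a hypothetical links table with id, url, is_dead and last_checked columns:

        <?php
        $result = mysql_query("SELECT id, url FROM links");
        while ($row = mysql_fetch_assoc($result)) {
            $ch = curl_init($row['url']);
            curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request: status only
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);
            curl_exec($ch);
            $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch);
            mysql_query(sprintf(
                "UPDATE links SET is_dead = %d, last_checked = NOW() WHERE id = %d",
                $code == 404 ? 1 : 0, $row['id']
            ));
        }
        ?>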

    Read the article

  • Best Website Statistics tool for Drupal

    - by Olav
    What is the best free website statistics setup I can have for Drupal 6 on Apache? Particularities:
    1. Multisite install. I might want to look across several sites, but clients' views should be restricted to their own site.
    2. Some hits bypass Drupal.
    3. Some URLs are not public.
    4. Some sites have little traffic, so it would be nice to be able to exclude "own" traffic.
    5. Logged-in users are not so important.
    (It seems Google Analytics is popular.)

    Read the article

  • Should I have a separate copy of all CakePHP files for every new application?

    - by BicMan
    I'm extremely new to CakePHP. From what I've gathered, it seems like I can have multiple applications that all share the same app and cake directories. So, let's say I have two applications. CakeFacebookApp and GenericCakeBlog. These applications are completely separate from each other and will have completely separate URLs, but they will reside on the same webhost. Should they both be within the same cake structure, or should they each have a full cake install in separate directories? Technically, I'm sure it will work either way, but I guess I'm looking for a best practice approach. Thanks.

    Read the article

  • Locking down multiple sites in Sitecore

    - by adam
    Hi, I have two sites running under one Sitecore 6 installation. The home nodes of the sites are /sitecore/content/Home and /sitecore/content/Careers. Assuming the primary site is at domain.com, the careers site can be accessed at careers.domain.com. My problem is that, by prefixing the URI with /sitecore/content/, any Sitecore item can be accessed from either (sub)domain. For example, I can get to http://domain.com/sitecore/content/careers.aspx (should be under careers.domain.com) and http://careers.domain.com/sitecore/content/home/destinations.aspx (should be under domain.com). I know I can redirect these URLs (using IIS7 redirects or ISAPIRewrite), but is there any way to 'lock' Sitecore down to only serve items under the configured home node for each domain? Thanks, Adam

    Read the article

  • How to transform html anchor <a> to WordML

    - by Monomachus
    Hi, I need to transform an anchor tag to WordML without using relationships. Is it possible? I found the w:anchor attribute, but it seems to refer only to internal document anchors, not to links or URLs.

        <w:hyperlink w:anchor="chapter3">
          <w:r>
            <w:t>Go to Chapter Three</w:t>
          </w:r>
        </w:hyperlink>

    So it would be great if something similar were possible without creating an Id in the relationships document and then referring to that Id from w:hyperlink. Does anyone know of something like that?
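
    One relationship-free option worth trying (a sketch): represent the link as a HYPERLINK field rather than a w:hyperlink element, since a w:fldSimple field carries the target URL inside its own field instruction:

        <!-- the URL lives in w:instr, so no entry in the relationships part is needed -->
        <w:fldSimple w:instr=" HYPERLINK &quot;http://www.example.com/&quot; ">
          <w:r>
            <w:t>Go to example.com</w:t>
          </w:r>
        </w:fldSimple>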

    Read the article

  • Google App Engine - SiteMap Creation for a social network

    - by spidee
    Hi all. I am creating a social tool, and I want to allow search engines to pick up "public" user profiles, like Twitter and Facebook do. I have seen all the protocol info at http://www.sitemaps.org, and I understand it and how to build such a file, along with an index if I exceed the 50K limit. Where I am struggling is the concept of how I make this run. The sitemap for my general site pages is simple: I can use a tool or a script to create the file, host it, submit it, and be done. What I then need is a script that will create the sitemaps of user profiles. I assume this would be something like:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.socialsite.com/profile/spidee</loc>
            <lastmod>2010-5-12</lastmod>
            <changefreq>???</changefreq>
            <priority>???</priority>
          </url>
          <url>
            <loc>http://www.socialsite.com/profile/webbsterisback</loc>
            <lastmod>2010-5-12</lastmod>
            <changefreq>???</changefreq>
            <priority>???</priority>
          </url>
        </urlset>

    I've put ??? where I don't know how to set these values for my profiles, based on the following. When a new profile is created, it must be added to a sitemap. If the profile (or "certain" properties of it) changes, I don't know whether I update the entry in the map or do something else (updating would be a nightmare!), and some users may change their profiles. In terms of relevance to the search engine, the only way a Google or Yahoo search will find a user's profile (for my requirement) is by [user name] and [location], so once the entry has been added to the map file, the only reasons to have the search bot re-index the profile would be the user changing their user name (which they can't), changing their location, or setting their profile to be "hidden" from search engines.

    I assume my map creation will need to be dynamic: creating a new profile, and possibly editing certain properties, could mark it as needing adding/updating in the sitemap. Assuming I will have millions of profiles being added and edited, how can I manage this in a sensible manner? I know I need a script that can append URLs as each profile is created, and it will probably be a TASK running at a set frequency; perhaps the profiles have a property like "indexed", and the task sets it to true when the profile is added to the map. I don't see the best way to store the map. Do I store it in the datastore, i.e.:

        model: sitemaps
        key_name: sitemap_xml_1 (and, for my map index, sitemap_index_xml)
        mapxml: blobstore (the raw XML sitemap, or a ROR map)
        full: boolean (set true when the URL count reaches 50K; might be needed, as a shard will tell us)

    To make this work, my thoughts are: memcache the current sitemap structure as "sitemap_xml" and keep a sharded count of URLs. When my task executes:
    1. Build the XML structure for, say, the first 100 URLs marked index == false (how many could you run at a time?).
    2. Test whether the current memcached sitemap is full (shard counter + 100 > 50K).
    3a. If the map is near full, create a new map entry in the model ("sitemap_xml_2"), update the map index file (also stored in my model as "sitemap_index"), and start a new shard or reset the counter.
    3b. If the map is not full, grab it from memcache.
    4. Append the 100-URL XML structure.
    5. Save and re-memcache the map.
    I can then add a handler using a URL map/route like /sitemaps/*, take * as the map name, and serve the maps from the blobstore/memcache on the fly.
    Now my question is: does this work? Is this the right way, or at least a good way, to start? Will this make sure the search bots update when a user changes their profile, possibly by setting the change frequency correctly? Do I need a more advanced system, or have I re-invented the wheel? I hope this is all clear and makes some form of sense :-)
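
    A minimal sketch of the datastore model described above, assuming the App Engine Python runtime (property names are illustrative):

        from google.appengine.ext import db

        class Sitemap(db.Model):
            # key_name: 'sitemap_xml_1', 'sitemap_xml_2', ... or 'sitemap_index'
            map_xml   = db.TextProperty()                  # the raw sitemap (or ROR) XML
            url_count = db.IntegerProperty(default=0)      # counter toward the 50K limit
            full      = db.BooleanProperty(default=False)  # True once url_count hits 50000

    The /sitemaps/* handler can then look the entity up by key_name and write map_xml out directly, with memcache in front of the datastore read.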

    Read the article

  • How can I request local pages in the background of an ASP.NET MVC app?

    - by flipdoubt
    My ASP.NET MVC app needs to run a set of tasks at startup, and then in the background at a regular interval. I have implemented each task as a controller action and listed the app-relative path to each action in the database. I implemented a TaskRunner process that gets the URLs from the database and requests each one at a regular interval using WebRequest.Create, but this throws a UriFormatException. I cannot use this answer, or any code that plucks values from HttpContext.Current.Request, without getting an HttpException with the message "Request is not available in this context". The Request object is not available because my code uses System.Threading.Timer to do the background processing, as recommended here. Here are my questions: Is there really no way to make local web requests within an ASP.NET web app? Is there really no way to dynamically ascertain the root path to the web app, even using static dependencies in ASP.NET? I was trying to avoid storing the app's root path in the database (as FogBugz does with its "Maintenance Path"), but is that the best option?
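
    A sketch of one workaround, with illustrative names not taken from the original app: capture the site root in a static field the first time a real request arrives (say, from Application_BeginRequest), then build absolute URLs from it inside the timer callback, where no Request exists.

        public static class TaskRunner
        {
            private static string _baseUrl;

            // call from Application_BeginRequest, where Request is available
            public static void EnsureBaseUrl(System.Web.HttpRequest request)
            {
                if (_baseUrl == null)
                    _baseUrl = request.Url.GetLeftPart(System.UriPartial.Authority)
                             + request.ApplicationPath.TrimEnd('/');
            }

            // safe to call from a System.Threading.Timer callback
            public static void RunTask(string relativePath)
            {
                // _baseUrl makes the URI absolute, so WebRequest.Create no longer throws
                var req = System.Net.WebRequest.Create(_baseUrl + relativePath);
                using (var resp = req.GetResponse()) { }
            }
        }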

    Read the article

  • Randomly Losing Session Variables Only In Google Chrome & URL Rewriting

    - by Toby
    Using Google Chrome, I'm seemingly losing/corrupting session data when navigating between pages (PHP 5.0.4, Apache 2.0.54). The website works perfectly in IE7/8, Firefox, Safari and Opera; the issue only occurs in Google Chrome. I've narrowed down the problem: I'm using search-friendly URLs and hiding my front controller (index.php) via an .htaccess file, so a URL looks like www.domain.com/blah/blah/. Here are the .htaccess file's contents:

        Options +FollowSymlinks
        RewriteEngine on
        # allow cool urls
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*) index.php [L]
        # allow to have Url without index.php

    If I remove the .htaccess file and expose the front controller in the URL, www.domain.com/index.php/blah/blah/, Chrome works perfectly fine. Any thoughts or ideas? I'm thinking it's some kind of problem with how Chrome identifies which cookie to use and send to the server? This happens in Chrome 4 & 5. Thanks!

    Read the article

  • passing url in parameters mvc4

    - by user516883
    I have a site that collects URLs. A full http URL is entered into a textbox, and I get a 400 error when that URL is passed as a parameter; it works fine with regular text. Using jQuery, how can I pass the full URL in my application? Thanks for any help.

    MVC routing config:

        routes.MapRoute("UploadLinks", "media/upload_links/{link}/{albumID}",
            new { controller = "Media", action = "WebLinkUpload" });

    Controller action:

        public ActionResult WebLinkUpload(string link, string albumID) {}

    jQuery AJAX call:

        $('#btnUploadWebUpload').click(function () {
            $.ajax({
                type: "GET",
                url: "/media/upload_links/" + encodeURIComponent($('#txtWebUrl').val().trim()) + "/" + currentAlbumID,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (result) { }
            });
        });
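
    A likely culprit (and a sketch of a workaround, not a confirmed diagnosis): ASP.NET rejects encoded slashes (%2F) in the path portion of a URL with a 400 by default, so passing the link as a query-string value instead of a route segment sidesteps the issue. The existing WebLinkUpload(string link, string albumID) action will bind both values from the query string without any route changes.

        $('#btnUploadWebUpload').click(function () {
            $.ajax({
                type: "GET",
                url: "/media/upload_links",
                // jQuery serializes these as ?link=...&albumID=..., encoding as needed
                data: { link: $('#txtWebUrl').val().trim(), albumID: currentAlbumID },
                dataType: "json",
                success: function (result) { }
            });
        });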

    Read the article

  • match 'article' in url RewriteRule

    - by daniel Crabbe
    Hello there. I'm building a site which has content for each section; URLs range from /work/, /work/print/, /work/print/folders, etc. However, at any point a user can click on an article, so: /work/article/1066, /work/print/article/1066, /work/print/folders/article/1066. Using .htaccess, I need to detect when 'article' is in the URL and set some different variables.

        RewriteRule ^([a-zA-Z0-9\-]+)/([a-zA-Z0-9\-]+)/([a-zA-Z0-9\-]+)/([a-zA-Z0-9\-]+)/$ index.php?level1=$1&level2=$2&level3=$3&level4=$4

    But if 'article/([0-9]+)' is in the URL, say /work/print/article/1066, it should return index.php?level1=$1&level2=$2&articleID=1066. Basically, the number of levels will always vary, but I'd like to return them as needed. Another example: /work/print/folder/archive/article/1066 should return index.php?level1=$1&level2=$2&level3=$3&level4=$4&articleID=1066. Any help appreciated! Dan
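
    A sketch of one way to handle the variable depth without enumerating every level count: peel off the trailing article/<id> first, pass everything before it as a single value, and explode('/') it in PHP (rule order matters, so the article rule comes first):

        RewriteEngine On
        # /work/print/article/1066 -> index.php?levels=work/print&articleID=1066
        RewriteRule ^(.+)/article/([0-9]+)/?$ index.php?levels=$1&articleID=$2 [L,QSA]
        # /work/print/folders/ -> index.php?levels=work/print/folders
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+?)/?$ index.php?levels=$1 [L,QSA]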

    Read the article

  • SQL SELECT with time range

    - by nLL
    Hi, I have the click_log table below, logging hits for some URLs:

        site  ip         ua  direction  hit_time
        -----------------------------------------------
        1     127.0.0.1       1          2010/01/01 00:00:00
        2     127.0.0.1       1          2010/01/01 00:01:00
        3     127.0.0.1       0          2010/01/01 00:10:00
        ...   .........

    I want to select incoming hits (direction: 1), grouped by site, that are from the same IP and browser, logged within 10 minutes of each other, and occurring more than 4 times in those 10 minutes. I'm not sure if that was clear enough; English is not my first language, so let me try an example. If site 1 gets 5 hits from the same IP and browser within 10 minutes of getting the first unique hit from that IP and browser, I want it included in the selection. Basically, I am trying to find abusers.
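
    A sketch of one way to express this in MySQL, using the column names above: find each visitor's (site, ip, ua) first incoming hit, then count the incoming hits that fall within 10 minutes of it.

        SELECT a.site, a.ip, a.ua, COUNT(*) AS hits_in_window
        FROM click_log a
        JOIN (
            SELECT site, ip, ua, MIN(hit_time) AS first_hit
            FROM click_log
            WHERE direction = 1
            GROUP BY site, ip, ua
        ) f ON f.site = a.site AND f.ip = a.ip AND f.ua = a.ua
        WHERE a.direction = 1
          AND a.hit_time < f.first_hit + INTERVAL 10 MINUTE
        GROUP BY a.site, a.ip, a.ua
        HAVING COUNT(*) > 4;

    This only inspects the 10-minute window after each visitor's first hit; a true sliding window would need a self-join treating every hit as a potential window start.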

    Read the article
