Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.

Page 13/150 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Can Joomla be configured to browse through a proxy server?

    - by Aseques
    In WordPress there are some simple class variables that make configuring an outbound proxy trivial:

        var $proxy_host = ""; // proxy host to use
        var $proxy_port = ""; // proxy port to use
        var $proxy_user = ""; // proxy user to use
        var $proxy_pass = ""; // proxy password to use

    Is there an equivalent way to accomplish the same thing in Joomla? I've been searching the internet and couldn't find anything. For the original WordPress source, see here.
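    One caveat: the Joomla version matters here. More recent releases (Joomla 3.x) expose proxy settings directly in Global Configuration (Server tab), stored as fields in configuration.php. A minimal sketch, assuming those stock 3.x configuration fields (treat the availability of these fields as an assumption for older versions):

        <?php
        // configuration.php (fragment) - proxy fields written by Joomla's
        // Global Configuration -> Server -> Proxy settings (3.x era)
        class JConfig
        {
            public $proxy_enable = '1';              // turn outbound proxying on
            public $proxy_host   = 'proxy.example';  // hypothetical proxy host
            public $proxy_port   = '3128';
            public $proxy_user   = '';               // leave empty if no auth
            public $proxy_pass   = '';
        }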

    Read the article

  • How do I fix the paths of my website?

    - by EASI
    I have Joomla 2.5.7 on my client's server, recently updated from 1.6 to 1.7. I did not build that site, but I am responsible for it now (I would rather build a site from zero). Users are clicking on the menu options, and when Joomla sends them to a SEF URL like http://iap.pa.gov.br/acervo they get the message: 404 - The requested URL /acervo was not found on this server. Could that be because they moved the Joomla folder from the root to root/iap (the name of the site)? If so, what configuration needs to change to adapt to the new folder?
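    If the site did move into a subfolder with SEF URLs and mod_rewrite enabled, the usual suspects are the RewriteBase in Joomla's .htaccess and the $live_site value in configuration.php. A minimal sketch, assuming Apache and Joomla's stock .htaccess (the /iap path is taken from the question):

        # .htaccess at root/iap/ - adjust RewriteBase to match the subfolder
        RewriteEngine On
        RewriteBase /iap

    With that in place, check that $live_site in configuration.php is either empty or set to the full new base URL, and that menu items were not saved with absolute URLs pointing at the old root.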

    Read the article

  • What does the reorder() function do in Joomla?

    - by Parth
    What does the reorder() function do in Joomla? In the administrator code for the menu "copy" function I have this statement:

        $curr->reorder( 'menutype = ' . $this->_db->Quote($curr->menutype) . ' AND parent = ' . (int) $curr->parent );

    When I executed this code it reordered my IDs into different positions. Is this reorder() a built-in function of Joomla? If so, what does it do, and how did it do what it did to my IDs? :( I am a newbie to Joomla, please help.

    EDIT: How can I get the reordered output just after calling the function?
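    For context: reorder() is indeed built into Joomla - it is a method on JTable (libraries/joomla/database/table.php in 1.5), inherited by every table class with an ordering column. It does not touch the primary-key ids; it renumbers the ordering column so the rows matching the WHERE clause run 1, 2, 3... with no gaps. A simplified sketch of that logic (not the exact core source; $db and the #__menu table stand in for the table instance's own connection and name):

        <?php
        // Rough equivalent of JTable::reorder($where) for a menu table
        $query = 'SELECT id, ordering FROM #__menu'
               . ' WHERE ' . $where        // e.g. menutype = '...' AND parent = 0
               . ' ORDER BY ordering';
        $db->setQuery($query);
        $rows = $db->loadObjectList();

        // Renumber sequentially, closing any gaps left by copies or deletes
        foreach ($rows as $i => $row) {
            if ($row->ordering != $i + 1) {
                $db->setQuery('UPDATE #__menu SET ordering = ' . ($i + 1)
                            . ' WHERE id = ' . (int) $row->id);
                $db->query();
            }
        }

    To see the result right after calling it, re-query the table (or reload a row with $curr->load($curr->id)); the new sequence lives in the database.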

    Read the article

  • Create Shortened goo.gl URLs in Your Favorite Browser

    - by Asian Angel
    Now that the new goo.gl URL shortening service has been active for a bit, you may be wanting to add it to your favorite non-Internet Explorer/Firefox browser. See how easy it is to enjoy that goo.gl URL shortening goodness with the "goo.gl with prompt for copying" bookmarklet.

    goo.gl with Prompt for Copying Bookmarklet in Action: To add the bookmarklet to your browser, simply drag it to your "Bookmarks Toolbar" and get ready to start creating those shortened URLs. For our example we chose one of the wonderful malware removal articles here at the site. All that you need to do is click on the bookmarklet to create your shortened URL. The wonderful thing about this bookmarklet is the small pop-up window that provides an easy way for you to copy the shortened URL. You can see the "Context Menu" for the small window here…definitely nice. Once you have copied your new URL you can close the window by clicking "OK" or pressing "Enter". Now you are ready to use your new shortened goo.gl URL wherever you like or need to.

    Conclusion: If you have been wanting to add goo.gl URL shortening power to your favorite browser, then this is the perfect bookmarklet to have.

    Links: Add the goo.gl with Prompt for Copying Bookmarklet to Your Favorite Browser

    Read the article

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand-new pages, when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP URLs to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to the HTTP URLs and then 301 redirect them to the new HTTPS URLs? Or should I leave the sitemap as is and set up the 301 redirects anyway? I'm not even sure Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.
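    For reference, a 301 at the old HTTP URLs is exactly how the history is carried over, and it can be added after the fact. A minimal sketch, assuming Apache with mod_rewrite (this would replace whatever is currently issuing the 303s, with the sitemap left pointing at the HTTPS URLs):

        # .htaccess - permanently redirect every HTTP request to HTTPS
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]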

    Read the article

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple – I asked ACS ;) ACS publishes a JSON-encoded feed that contains information about all registered identity providers: their display names, logos and URLs. With that information you can easily write a discovery client which, at the very heart, does this:

        public void GetAsync(string protocol)
        {
            var url = string.Format(
                "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",
                AcsNamespace,
                "accesscontrol.windows.net",
                protocol,
                Realm);

            _client.DownloadStringAsync(new Uri(url));
        }

    The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or notify method. Now with the help of some JSON serializer you can turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.

    Read the article

  • 500 - An error has occurred! DB function reports no errors when adding a new article in Joomla!

    - by Roland
    I have an article that I want to publish on my Joomla! site. Every time I click Apply or Save, I get: 500 - An error has occurred! DB function reports no errors. I have no idea why this error comes up; all I can think is that it's a server error. I'm using TinyMCE to type articles, together with Joomla! 1.5.11.

    Update: I turned on maximum error reporting in Joomla!, tried to save the article in the article manager, and got the errors shown in the screenshot. I tried adding

        <?php
        ini_set('error_reporting', E_ALL);
        error_reporting(E_ALL);
        ini_set('log_errors', TRUE);
        ini_set('html_errors', TRUE);
        ini_set('display_errors', true);
        ?>

    at the top of the index.php pages for Joomla!, but it does not show any errors. I checked the error logs on the server and no errors come up there either. I managed to publish the article via phpMyAdmin, but then something else happens: when I try to access the article from the front end by clicking on its link, only a blank page comes up. This is really weird, since the error log does not show any information, so I assume the error must be coming from Joomla! This is what I get if I add a print_r($_POST) before if (!$row->check()) { in /administrator/components/com_content/controller.php (around line 693):

        Array
        (
            [title] => Test.
            [state] => 0
            [alias] => test
            [frontpage] => 0
            [sectionid] => 10
            [catid] => 44
            [details] => Array
                (
                    [created_by] => 62
                    [created_by_alias] =>
                    [access] => 0
                    [created] => 2008-10-25 13:31:21
                    [publish_up] => 2008-10-25 13:31:21
                    [publish_down] => Never
                )
            [params] => Array
                (
                    [show_title] =>
                    [link_titles] =>
                    [show_intro] =>
                    [show_section] =>
                    [link_section] =>
                    [show_category] =>
                    [link_category] =>
                    [show_vote] =>
                    [show_author] => 1
                    [show_create_date] => 0
                    [show_modify_date] => 0
                    [show_pdf_icon] =>
                    [show_print_icon] =>
                    [show_email_icon] =>
                    [language] =>
                    [keyref] =>
                    [readmore] =>
                )
            [meta] => Array
                (
                    [description] => Test.
                    [keywords] => Test
                    [robots] =>
                    [author] => Test
                )
            [id] => 58
            [cid] => Array
                (
                    [0] => 58
                )
            [version] => 30
            [mask] => 0
            [option] => com_content
            [task] => apply
            [ac1e0853fb1b3f41730c0d52de89dab7] => 1
        )

    I had a bounty on this question, but the problem is still not resolved: link text. Any help will be appreciated! Here is a link to the article (a text file with the source I got from TinyMCE): Article

    Read the article

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest issue with search engine optimization. I have asked the IT folks at my university, asked people on the Joomla forums, and have been trying to sort this out with Google Webmaster Tools for more than two months, to little avail. I want to know if I have some blatantly wrong configuration somewhere that is causing search engines to be unable to index this site. I noticed a similar issue with another website I searched for online (ECEGSA - The University of British Columbia, at gsa.ece.ubc.ca), making me believe this might be a concern other people are looking for an answer to. Here are the details:

    The website in question is http://gsa.ece.umd.edu/. It runs Joomla 2.5.x (latest). The site has been up since around mid-December 2013, and I noticed right from the get-go that it was not being indexed correctly on Google. Specifically, I see the following message when I search for the website on Google: "A description for this result is not available because of this site's robots.txt – learn more." The thing is, from December until around March I used the default Joomla robots.txt file, which is:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        Disallow: /cli/
        Disallow: /components/
        Disallow: /images/
        Disallow: /includes/
        Disallow: /installation/
        Disallow: /language/
        Disallow: /libraries/
        Disallow: /logs/
        Disallow: /media/
        Disallow: /modules/
        Disallow: /plugins/
        Disallow: /templates/
        Disallow: /tmp/

    Nothing there should stop Google from indexing my website. Even more confusingly, in Google Webmaster Tools, under the "Blocked URLs" tab, when I test many of the links on the site, they are all shown as "Allowed". I then tried adding a sitemap and referencing it in the robots.txt file. That did not help: same exact search result, same behavior in the "Blocked URLs" tab. Additionally, the "Sitemaps" tab shows an error for several links saying "URL is robotted out" - yet I tried those exact links under "Blocked URLs" and they are allowed! I then tried deleting the robots.txt file entirely. No use; same exact problem. Here is an example screenshot from Google's Webmaster Tools. At this point neither I nor anyone in the IT department here can give a rational explanation for why this is happening, and no one on the Joomla forums seems to understand what is going on. Based on what I explained, does it seem that I have somehow set something incorrectly in robots.txt, in .htaccess, or somewhere else?

    Read the article

  • Should my URLs be lowercase?

    - by Rowan Freeman
    According to this blog post ("Understanding SEO Friendly URL Syntax Practices") I should change

        http://example.com/Hello-Dolly

    to

        http://example.com/hello-dolly

    The reasons given are:

    - URLs, in general, are case-sensitive
    - it will simplify any case-sensitive SEO and analytics reports

    According to a GIF I found in Wikipedia's article on URL normalization, I should likewise convert my URLs from any uppercase to all lowercase. However, I use ASP.NET MVC4, and by default my URLs are structured like this (CamelCase):

        http://www.domain.com/Controller/Action/Parameter
        http://www.greatsite.com/Categories/List/Bicycles

    I've skimmed through RFC 1738 but didn't see any definitive answer. Should I go out of my way to force the framework to change everything to lowercase? Why did Microsoft choose to design the framework this way if everybody is telling me to use lowercase?

    Read the article

  • How important are SEO Friendly URLs [closed]

    - by nute
    Possible Duplicate: Is a URL with a query string better or worse for SEO than one without?

    Currently my URLs look something like http://mydomain.ext/question/5, where question is the controller and 5 is the ID of the object or article retrieved. In theory I could spend some development time and some server resources to have URLs that contain more information about the page loaded. However, seeing how websites like YouTube and many others keep simple URLs with just an ID, I am asking: does it matter? Is it worth it?
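    For what it's worth, one common middle ground (the pattern Stack Overflow uses) is to route on the numeric ID alone but append a descriptive slug and canonicalize it with a 301. A hedged PHP sketch - findQuestion() and its slug field are hypothetical placeholders, not part of the asker's code:

        <?php
        // Match /question/5 or /question/5/some-slug; only the ID is authoritative
        if (preg_match('#^/question/(\d+)(?:/([\w-]+))?#', $_SERVER['REQUEST_URI'], $m)) {
            $question = findQuestion((int) $m[1]);   // hypothetical data lookup
            if (!isset($m[2]) || $m[2] !== $question->slug) {
                // 301 to the canonical descriptive URL so engines index one form
                header('Location: /question/' . $question->id . '/' . $question->slug, true, 301);
                exit;
            }
            // ...render the question page
        }

    This keeps routing as cheap as the plain-ID scheme while still putting keywords in the URL.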

    Read the article

  • Robots.txt never downloaded but some blocked URLs in GWT

    - by Zistoloen
    There is something I don't understand in Google Webmaster Tools (GWT) for my WordPress site. The "Blocked URLs" menu says my robots.txt has never been downloaded, yet it also reports some blocked URLs. That seems contradictory. Am I missing something? Here is the file:

        User-agent: *
        Disallow: /*?
        Disallow: /wp-login.php
        Disallow: /wp-admin
        Disallow: /wp-includes
        Disallow: /wp-content
        Allow: /wp-content/uploads
        Disallow: */trackback
        Disallow: /*/feed
        Disallow: /*/comments
        Disallow: /cgi-bin
        Disallow: /*.php$
        Disallow: /*.inc$
        Disallow: /*.gz$
        Disallow: /*.cgi$
        Disallow: /author/*

    I'm afraid my robots.txt doesn't block several URLs I want to block.

    Read the article

  • Is it safe to block redirected (but still linked) URLs with robots.txt?

    - by Edgar Quintero
    I have a website where all URLs have been optimized and 301 redirected from nasty URLs to clean ones. However, the unclean URLs are still linked everywhere throughout the site - in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked everywhere on the site (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?

    Read the article

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention across a site with multiple web applications. Historically I've been an advocate of 'lowercase all the letters!' when creating URLs:

        http://example.com/mysystem/account/view/1551

    However, within the last year or two - specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs - I've become a fan of capitalizing the first letter of each section/word within the URL, as it makes it easier to read (imho):

        http://example.com/MySystem/Account/View/1551

    We're not in a situation where people need to read or understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare one way or the other good practice, or issues we may run into on (at least realistically modern) setups that would favor one over the other? What is the general consensus in this debate currently?

    Read the article

  • Weird URLs being accessed by Googlebot

    - by Avishai
    Lately I've been seeing all sorts of strange URLs show up as errors in my Webmaster Tools account, but they're URLs that don't actually exist on my site, nor are they linked from the pages Google claims they're linked from.

        URL                                            Response Code   Detected
        yR3kna/5RfA4+ndtn/X4zcevudMlXbqbIrnPbH9irw=    404             9/16/12
        OK4iaOVdr6Ocjmz+u1kuR5Q486mhDo/e45nwjl2+y8=    404             9/9/12
        pxGz/oHEA0BS8U3VFBzJcZnnIHMsFXb3/rIxMxh2ws=    404             9/16/12
        Af8tbvQ0HniIpf53I8Txz1hM1/JxxrFQxgqPuErWII=    404             9/9/12
        7Bk7c0LDmm4PHyTjml017EGwNNPCn/p/0xMSWWPDic=    404             9/16/12
        umCwnDvTE8ybpUB19MIb+VRj5xRJncyYGGfAQ2Mxn0=    404             9/1/12
        # etc...

    Do you know how to make these stop? It's not at all clear to me why Googlebot would be requesting these URLs in the first place.

    Read the article

  • Help! Joomla Templates!

    - by Steph
    Hi everyone, I need to turn this design: http://www.stephburningham.com/lmg/ into a Joomla template, but I've never made a Joomla template before and the tutorials are so complex! Can anyone start me off? Thanks a lot, Steph
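    A gentle starting point, since the full tutorials can be overwhelming: a Joomla template is, at minimum, a folder under templates/ containing a templateDetails.xml manifest plus an index.php that wraps your design's HTML around jdoc:include tags where Joomla injects content. A minimal sketch against the Joomla 1.5-era template API (the template and position names are placeholders to adapt to the design):

        <?php // templates/lmg/index.php - skeletal Joomla 1.5 template
        defined('_JEXEC') or die('Restricted access'); ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <jdoc:include type="head" />   <!-- Joomla injects title, CSS, JS -->
        </head>
        <body>
            <div id="header"><jdoc:include type="modules" name="top" /></div>
            <div id="content"><jdoc:include type="component" /></div>   <!-- main article area -->
            <div id="footer"><jdoc:include type="modules" name="footer" /></div>
        </body>
        </html>

    Slice the design's HTML/CSS around those includes and declare the module positions in templateDetails.xml, and Joomla does the rest.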

    Read the article

  • How to find the Frenzy forum homepage on my Joomla site?

    - by Joomdrup
    Hello. I am a noob to Joomla, migrating from Drupal. I found a nice forum module called Frenzy. Its administration menu is nice, but I am not able to find the forum page itself. I searched the menus for a hidden link but couldn't find one. I know that Joomla is not the same as Drupal; that's why I am asking here. Where can I find the homepage of the module?
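    One Joomla convention worth knowing here: a component's front page is usually reachable directly through its option parameter, even before any menu item links to it. Assuming Frenzy installs as a component named com_frenzy (a guess from the name; check the actual folder under components/), that would be:

        index.php?option=com_frenzy

    Once that URL works, create a menu item pointing at the component so it gets a proper SEF URL.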

    Read the article

  • Django admin fails when using includes in urlpatterns

    - by zenWeasel
    I am trying to refactor my application a little to keep it from getting too unwieldy, so I started to move some of the urlpatterns out to sub-files, as the documentation proposes. Besides the fact that it just doesn't seem to be working (the items are not being rerouted), when I go to the admin it says that 'urlpatterns has not been defined'. The urls.py I have at the root of my application is:

        if settings.ENABLE_SSL:
            urlpatterns = patterns('',
                (r'^checkout/orderform/onepage/(\w*)/$', 'checkout.views.one_page_orderform',
                 {'SSL': True}, 'commerce.checkout.views.single_product_orderform'),
            )
        else:
            urlpatterns = patterns('',
                (r'^checkout/orderform/onepage/(\w*)/$', 'commerce.checkout.views.single_product_orderform'),
            )

        urlpatterns += patterns('',
            (r'^$', 'alchemysites.views.route_to_home'),
            (r'^%s/' % settings.DAJAXICE_MEDIA_PREFIX, include('dajaxice.urls')),
            (r'^/checkout/', include('commerce.urls')),
            (r'^/offers', include('commerce.urls')),
            (r'^/order/', include('commerce.urls')),
            (r'^admin/', include(admin.site.urls)),
            (r'^accounts/login/$', login),
            (r'^accounts/logout/$', logout),
            (r'^(?P<path>.*)/$', 'alchemysites.views.get_path'),
            (r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
        )

    The URLs I have moved out so far are checkout/offers/order, which are all sub-apps of 'commerce'. So, to be clear: the urls.py in question is the one included above, and the urls.py I want to include, at /commerce/urls.py, is:

        order_info = {
            'queryset': Order.objects.all(),
        }

        urlpatterns += patterns('',
            (r'^offers/$', 'offers.views.start_offers'),
            (r'^offers/([a-zA-Z0-9-]*)/order/(\d*)/add/([a-zA-Z0-9-]*)/(\w*)/next/([a-zA-Z0-9-)/$', 'offers.views.show_offer'),
            (r'^reports/orders/$', list_detail.object_list, order_info),
        )

    The application offers lies under commerce. The additional problem is that the admin will not work at all, so I'm thinking I killed it somewhere with my includes. Things I have checked for:

    - Is the urlpatterns variable accidentally getting reset somewhere (i.e. urlpatterns = patterns instead of urlpatterns += patterns)?
    - Are the patterns in commerce.urls valid? (Yes; when moved back to the root they work.)

    From there I am stumped. I can move everything back into the root, but I was trying to get a little decoupled, not just for theoretical reasons but for some short-term ones. Lastly, if I enter www.domainname/checkout/orderform/onepage/xxxjsd I get the correct page; however, entering www.domainname/checkout/ gets handled by alchemysites.views.get_path. If there's no answer to this (because it is pretty darn specific), is there at least a good way to troubleshoot urls.py? It seems to just be trial and error; there should be some sort of parser that will tell you what your urlpatterns will do.

    Read the article

  • How to write custom (odd) authentication plugins for Wordpress, Joomla and MediaWiki?

    - by Bart van Heukelom
    On our network (a group of related websites - not a LAN) we have a common authentication system which works like this: On a network site ("consumer") the user clicks on a login link This redirects the user to a login page on our auth system ("RAS"). Upon successful login the user is directed back to the consumer site. Extra data is passed in the query string. This extra data does not include any information about the user yet. The consumer site's backend contacts RAS, with this extra data, to get the information about the logged in user (id, name, email, preferences, etc.). So as you can see, the consumer site knows nothing about the authentication method. It doesn't know if it's by username/password, fingerprint, smartcard, or winning a game of poker. This is the main problem I'm encountering when trying to find out how I could write custom authentication plugins for these packages, acting as consumer sites: Wordpress Joomla MediaWiki For example Joomla offers a pretty simple auth plugin system, but it depends on a username/password entered on the Joomla site. Any hints on where to start?
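    To give a flavor of one of the three: on the Joomla side, an authentication plugin is a single event handler, and nothing in the contract forces a username/password check - the handler can ignore the credentials and trust the RAS lookup instead. A minimal sketch against the Joomla 1.5 plugin API (fetchRasUser() is a hypothetical helper that would contact RAS with the extra data from the query string):

        <?php
        // plugins/authentication/ras.php - sketch of a Joomla 1.5 auth plugin
        defined('_JEXEC') or die();
        jimport('joomla.plugin.plugin');

        class plgAuthenticationRas extends JPlugin
        {
            function onAuthenticate($credentials, $options, &$response)
            {
                // Hypothetical: resolve the RAS token passed back on the
                // query string into the user's id, name, email, etc.
                $user = fetchRasUser(JRequest::getVar('ras_token'));

                if ($user) {
                    $response->status   = JAUTHENTICATE_STATUS_SUCCESS;
                    $response->username = $user->username;
                    $response->email    = $user->email;
                    $response->fullname = $user->name;
                } else {
                    $response->status        = JAUTHENTICATE_STATUS_FAILURE;
                    $response->error_message = 'RAS lookup failed';
                }
            }
        }

    WordPress (the authenticate filter) and MediaWiki (the AuthPlugin class, in that era) offer analogous single hook points, so the same trust-the-RAS-lookup approach carries over.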

    Read the article

  • We Convert Your PSD into XHTML

    - by Aditi
    Over the last few months we have been receiving a lot of inquiries for PSD-to-XHTML projects, while we were majorly focusing on custom WordPress, Magento, Drupal & Joomla projects. Given the demand, we are now offering a PSD-to-XHTML/CSS service at an affordable price. We will also convert a PSD into any CMS, like WordPress, Drupal, Magento or Joomla. Our custom services will continue as they are. It is very convenient to get your design converted by our XHTML & CSS experts, and we assure a 24-hour delivery time. At JustSkins, we have a structured conversion model that works well for any kind of potentially enriched web business solution. Our customized slicing guidelines, besides W3C-approved XHTML and CSS code naming conventions, make us stand out from the competitors.

    Why should you let us do it for you?

    - W3C-compliant HTML/XHTML and CSS code
    - Well-structured, clean, hand-coded markup - no WYSIWYG
    - Fast turnaround time: design converted into XHTML/CSS in just one business day
    - Multi-browser accessible websites with cross-platform support
    - Excellent customer service
    - Affordable

    We at JustSkins are a team of efficient programmers with vast experience in templating for content management systems (CMS): Joomla, Drupal, WordPress and other open-source technologies. Contact us today with your requirements!

    Read the article
