Search Results

Search found 9763 results on 391 pages for 'ys pro'.


  • Forwarding a subdomain to the main domain using GoDaddy.

    - by Ryan Hayes
    I have a blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS so that the blog subdomain forwards (301) to my main domain using the GoDaddy DNS manager? Bonus: what is the background on why I am getting a 403 error the current way? Forbidden: You don't have permission to access / on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request. UPDATE 12/7/2010: The error on the site has been fixed, so you can no longer view it from my site.
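
    One way to get the 301 without touching anything beyond the existing CNAME is to let the web server issue the redirect. Below is a minimal .htaccess sketch for the hosting account behind ryanhayes.net, assuming Apache with mod_rewrite enabled (typical on GoDaddy shared hosting); it is an illustration, not GoDaddy's documented procedure:

      # Redirect any request arriving on the blog subdomain to the
      # same path on the main domain with a permanent (301) redirect.
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^blog\.ryanhayes\.net$ [NC]
      RewriteRule ^(.*)$ http://ryanhayes.net/$1 [R=301,L]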

    Read the article

  • Managing 404 error pages with noindex and url rewrite

    - by ZenMaster
    Currently I use custom 404 error pages that include the following meta tag: <meta content="noindex" name="robots"> My guess is that this way Google will remove deleted pages from the index faster. Has anyone experienced a case where it does? Also, is it better to have the URL path rewritten to the actual error page, following the URL pattern http://{mysite}/{404_error_page}, or is it best to keep the old deleted page's URL when serving a 404 error?
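
    For reference, a hedged sketch of how such a page is usually wired up in Apache: a relative ErrorDocument path keeps the 404 status code and the originally requested URL, whereas a full http:// URL would turn the response into a redirect. The /404.php filename is just a placeholder:

      # .htaccess / vhost config: serve the custom error page (which carries
      # the noindex meta tag) while preserving the 404 status and the URL
      # that was actually requested.
      ErrorDocument 404 /404.php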

    Read the article

  • Switching website DNS with minimal downtime

    - by Gavin
    I need to change my domain's DNS, but here in Thailand the ISPs don't follow any standards for refreshing their cache of records, so it can take the full two days. I have a new website with about 1,000 members who are currently sending messages and uploading photos. I don't want the sites on both the old and new servers running at the same time, because then I'd have to manually merge the databases and transfer the uploaded photos. It would be a huge pain. Both servers also have a unique URL from my host, e.g. website12345.hostingcompany.com (old host) and website67890.hostingcompany.com (new host). I don't have much experience with this, but I think what I can do is use .htaccess on www.mysite.com to do a masked redirect to the new server's URL (website67890.hostingcompany.com). Is it possible to do this and keep all URLs masked? For example, www.mysite.com/profile/username would actually be loading website12345.hostingcompany.com/profile/username. From Google searches it sounds like this is possible, but I don't understand why it would be allowed, given the security issues: what's keeping people from masking their site to URLs like facebook.com? I could really use some advice here! Thanks!
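
    As a rough illustration of the masking idea (not a recommendation), mod_rewrite on the old server can proxy rather than redirect, so the browser keeps showing www.mysite.com while the content comes from the new box. This sketch assumes mod_proxy and mod_rewrite are available on the old host, and it will not rewrite absolute links inside the returned HTML:

      # On the old server: transparently proxy every request to the new host.
      RewriteEngine On
      RewriteRule ^(.*)$ http://website67890.hostingcompany.com/$1 [P,L]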

    Read the article

  • In Google webmaster tools, can a "soft 404" be triggered by the text on the page?

    - by Stephen Ostermiller
    I just ran across an error in Google Webmaster Tools that I have never seen before. I manage the website for my local community band (I play trombone). One of the pages on the site is a list of our upcoming performances. It is powered by a WordPress events plugin that uses a database of upcoming events that are entered through the administration interface. We just finished up our summer and fall concerts and our next performance will be our Christmas concert. I hadn't gotten around to adding that to the website yet, so there are no upcoming events shown on the page. In fact, the text on the page says: No upcoming events listed under Performance. Check out past events for this category or view the full calendar. Then in Google Webmaster Tools, this page is showing up as a "soft 404": the page is returning a 200 status and Google is indicating that the 404 is "soft". I wouldn't have expected Googlebot to be sophisticated enough to parse that particular sentence. Is Googlebot able to detect that the text on the page indicates that there is currently no content, and then treat it as a 404 page because of that? If Google is treating this page as a soft 404 because of the text on the page, does that mean that, like regular 404 pages, the page won't show up in search results?

    Read the article

  • Soft 404 error on redirected outbound links

    - by Techlands
    I have a redirect script on my site which sends visitors to an affiliate site. However, in the last month I've noticed that Google Webmaster Tools is reporting my outbound links as 404 errors. Here is the breakdown of how it's set up: My outbound links are coded like this: <a href="/f/c123" rel="nofollow" target="_blank">Link Title</a> My redirect script will then perform a 302 redirect to the affiliate link. Originally I had the affiliate links (CJ) directly in the HTML, but I noticed over time that this had some impact on my site's traffic. So I changed them to a redirect script and my traffic returned. This seemed to work with no issues for over a year, but now I'm getting soft 404 errors in Google Webmaster Tools. I did try adding a rule to my robots.txt to block any links starting with /f/, but I'm not sure if this will help or whether Google will still report soft 404 errors. As a possible option, I am considering changing the a tag to a button tag and using an onclick event to load the link.
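
    For reference, the robots.txt rule the question describes would be a sketch like the following; note that it only stops crawling of the /f/ URLs, it does not by itself remove soft 404 reports that Google has already recorded:

      User-agent: *
      Disallow: /f/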

    Read the article

  • Apache2 "pseudo" doc root

    - by Brent
    I have several folders in my /www folder that contain various applications. To keep things organized, I keep them in their own folders -- this includes my base application. Examples: phpmyadmin = /www/phpmyadmin phpvirtualbox = /www/phpvirtualbox root domain site = /www/Landing The reason I segregate all of my sites is that I actively develop on some of these (my root site) and when I publish via Visual Studio, I choose to delete prior to upload - if I put the Landing page in the base folder, it would be devastating for me. My goal is that when I go to www.example.com - I go to my page. If I go to www.example.com/phpmyadmin, it does not work because of this in the Apache2 folder: <Location "/"> # Error is the "/" Allow from all Order allow,deny MonoSetServerAlias domain SetHandler mono SetOutputFilter DEFLATE SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary </Location> <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript </IfModule> If I change the location to say "/Other", then the base site is broken, and the aliases are restored for the other sites. If it is "/", then the base site works and no aliases work. What could I do to allow it to treat my /www/Landing as my webroot, but when I go to an alias, it GOES to the alias. Edit: Added in the default VirtualHost info. DocumentRoot /var/www <VirtualHost *:80> ServerAdmin [email protected] ServerName www.example.com ExpiresActive On ExpiresByType image/gif A2592000 ExpiresByType image/png A2592000 ExpiresByType image/jpg A2592000 ExpiresByType image/jpeg A2592000 ExpiresByType text/css "access plus 1 days" MonoServerPath domain "/usr/bin/mod-mono-server4" MonoDebug domain true MonoSetEnv domain MONO_IOMAP=all MonoApplications domain "/:/var/www/Landing" RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule (.*) /Landing/$1 [L] #Need to watch what the Location is set to. Can cause issues for alias <Location "/"> Allow from all Order allow,deny MonoSetServerAlias domain SetHandler mono SetOutputFilter DEFLATE SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary </Location> <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript </IfModule> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost>

    Read the article

  • Custom Upload Advanced Scripting CMS

    - by bradlis7
    I am looking for a specific content management platform that would display themes for my application. Requirements are as follows: any user can upload content, but it has to be approved by an administrator; when the user uploads the content, an external application is called to generate a thumbnail. I could create this using CodeIgniter or something, but I would much prefer to use an existing system. I have experience with Drupal (seems a little bloated for my needs) and WordPress (I'm using it as my main website right now). Maybe I need a plugin for WordPress instead of another CMS. WordPress currently blocks uploads of my file type. I can modify it, but it's a pain to update it every time WordPress has a new release.

    Read the article

  • How to prevent Google Website Optimizer from making Google Analytics spike Direct Traffic and lower Bounce Rate?

    - by Scott
    I am using Google Website Optimizer (GWO) and Google Analytics. Whenever Google Website Optimizer does a JavaScript redirect, Google Analytics will change the referrer. When the referrer changes, the traffic source becomes yourself and is changed to Direct Traffic. For example: a visitor goes to Google, searches for my great service, and clicks the link that goes to the website page /home/. At this point, Google Analytics tracks the referrer as Google. However, /home/ has a GWO JavaScript redirect to a battery of A/B tests: /home-1/ or /home-2/ or /home-3/. When the redirect from /home/ occurs to /home-1/, Google Analytics on the /home-1/ page now thinks the referrer is yourself and converts the referrer to Direct Traffic, since the Direct Traffic bucket is the unknown. I'm really surprised that GWO and GA do this when they both come from Google. Now, how does one fix this to prevent the overwrite of the referrer when using GWO?

    Read the article

  • Why does XFBML work everywhere but in Chrome?

    - by Andrei
    I am trying to add a simple Like button to my Facebook Canvas app (iframe). The button (and all other XFBML elements) works in Safari, Firefox, and Opera, but not in Google Chrome. How can I find the problem? EDIT1: This is the ERB layout in my Rails app <html xmlns:fb='http://www.facebook.com/2008/fbml' xmlns='http://www.w3.org/1999/xhtml'> ... <body> ... <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId: '<%= @app_id %>', status: true, cookie: true, xfbml: true }); FB.XFBML.parse(); }; (function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js#appId=<%=@app_id%>&amp;amp;xfbml=1'; document.getElementById('fb-root').appendChild(e); }()); FB.XFBML.parse(); </script> <fb:like></fb:like> ... JS error messages in the Chrome inspector: Uncaught ReferenceError: FB is not defined (anonymous function) Uncaught TypeError: Cannot call method 'appendChild' of null window (anonymous function) Probably similar to http://forum.developers.facebook.net/viewtopic.php?id=84684
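
    For comparison, a minimal sketch of the same layout with the stray top-level FB.XFBML.parse() call removed: the SDK loads asynchronously, so FB only exists inside the fbAsyncInit callback, and calling it at the top level is what raises "FB is not defined" when Chrome wins the race. This is only a rearrangement of the markup in the question, not an official Facebook snippet:

      <div id="fb-root"></div>
      <script>
        // Everything that touches FB waits for the asynchronous SDK to load.
        window.fbAsyncInit = function() {
          FB.init({ appId: '<%= @app_id %>', status: true, cookie: true, xfbml: true });
          FB.XFBML.parse();  // re-parse after init; no top-level calls to FB
        };
        (function() {
          var e = document.createElement('script');
          e.async = true;
          e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
          document.getElementById('fb-root').appendChild(e);
        }());
      </script>
      <fb:like></fb:like>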

    Read the article

  • Error 404 after rewrite query strings with htaccess

    - by Cristian
    I'm trying to redirect the URLs of a client's website from this: www.localsite.com/immobile.php?id_immobile=24 into something like this: www.localsite.com/immobile/24.php I'm using this rule in .htaccess, but it returns a 404 error page. RewriteEngine On RewriteCond %{QUERY_STRING} ^id_immobile=([0-9]*)$ RewriteRule ^immobile\.php$ http://localsite.com/immobile/%1.php? [L] I have tried many other rules, but none work. What can I do?
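
    A commonly used pair of rules for this kind of setup, offered here only as a sketch: one rule 301s direct requests for the old query-string URL to the pretty URL, and a second rule maps the pretty URL back onto the real script internally. Matching against %{THE_REQUEST} (the raw request line) keeps the first rule from firing on the internal rewrite and looping:

      RewriteEngine On

      # 1) Redirect client requests for the old URL to the pretty form.
      RewriteCond %{THE_REQUEST} \s/immobile\.php\?id_immobile=([0-9]+)\s [NC]
      RewriteRule ^immobile\.php$ /immobile/%1.php? [R=301,L]

      # 2) Internally rewrite the pretty URL back to the real script.
      RewriteRule ^immobile/([0-9]+)\.php$ immobile.php?id_immobile=$1 [L]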

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking access to the Railo admin site.
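
    One detail worth noting as a possible cause (offered as a guess, not a verified fix): .cfm/.cfc requests are handed to Tomcat via the proxy/JK mappings, and proxied requests never map to the filesystem, so <Directory> sections don't cover them; a <LocationMatch> on the URL path applies in both cases. A sketch, with a placeholder client address:

      # Deny everything under /Railo/ except the Public folder, for all
      # clients other than the listed addresses. Applies to the URL path,
      # so it also covers requests that are proxied to Tomcat/Railo.
      <LocationMatch "^/Railo/(?!Public/)">
          Order Deny,Allow
          Deny from all
          Allow from 192.0.2.10
      </LocationMatch>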

    Read the article

  • Falcoda Sub Domains [on hold]

    - by harry_p_6
    I run a website on Falcoda, using the home package, but the option to edit subdomains has gone. Please help! I had already set some up using the hosting console previously, but it was updated and now the link is gone. I found that you can use web.hostingconsole.info/sub-domains.cgi, but it says I can have 0 subdomains and I currently have -5 left. On the home package it says you get unlimited. Is this a problem with my account only, or is it like this for everybody?

    Read the article

  • Can preventing directory listings in WordPress upload folders cause Google ranking drops when they cause 403 errors in Webmaster Tools?

    - by Kelly
    I recently moved to a new host that blocks crawling of my uploads folders but (hopefully) allows the files in the folders to be crawled. I now see many 403 errors in my Webmaster Tools, one for each folder in the uploads folder. For example, http://www.rewardcharts4kids.com/wp-content/uploads/2013/07/ shows a 403 error, yet I can access a file inside it, such as http://www.rewardcharts4kids.com/wp-content/uploads/2013/07/lunch-box-notes.jpg, even though I cannot access the folder it is in. My rankings went down after I moved to this host and I am wondering whether this could be the reason, and whether this is how files and folders are supposed to be set up.
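
    A 403 on a bare folder URL is usually just directory listings being disabled, which is generally desirable for WordPress upload folders; the individual files stay crawlable. A hedged sketch of the relevant directive, placed in the site's .htaccess or the uploads folder's own .htaccess:

      # Return 403 for requests that would otherwise produce a directory
      # listing; files inside the folder remain directly accessible.
      Options -Indexes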

    Read the article

  • New WordPress posts generate 404 errors.

    - by Steve
    I had a working installation of WordPress, and I recently encountered an issue where, when I tried to log in to the back-end, the browser would redirect to the login URL of the previous domain WordPress was installed on. I fixed this by reinstalling WordPress, and I can now log in to the back-end, but any new posts I make, and any old posts I have, generate 404 errors. Additionally, if I try to navigate to any category page, I again receive a 404 error. I have looked at the wp_posts table of my database, and the GUID fields each contain the correct domain name and URL structure. What should I be checking here? Site in question.
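
    Symptoms like this often come down to the permalink rewrite rules being lost during the reinstall. Re-saving the permalink settings regenerates them; for reference, the stock WordPress rewrite block looks like the sketch below (assuming the site lives at the web root):

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress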

    Read the article

  • (Blogger) Map GoDaddy Domain For Blogger Custom Domain

    - by zulhfreelancer
    I just bought a new domain from GoDaddy (nurayka.net) and I want to use it for my .blogspot.com blog now. Here are my Blogger settings, and here are my GoDaddy DNS settings. After more than 24 hours, I still can't view my blog at the custom domain. It seems that something might be wrong with my DNS settings. Are my DNS settings correct? Should GoDaddy Domain Forwarding be enabled from 'nurayka.net' to 'www.nurayka.net'? Note: Before this, I went through the GoDaddy Blogger DNS Setup and CNAME tutorial. In the GoDaddy Blogger DNS Setup, I entered 'www.nurayka.net', and in the CNAME record (www), I entered 'ghs.google.com'. Thank you!
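
    For orientation, a zone-file style sketch of the records Blogger's custom-domain setup normally calls for: a CNAME for www pointing at ghs.google.com, and, if the naked domain should resolve as well, A records for the apex pointing at Google's published Blogger addresses. Treat the IPs as an assumption to be checked against Blogger's current help pages:

      ; www serves the blog via Google's servers
      www   IN CNAME ghs.google.com.
      ; apex (naked domain) A records, per Blogger's custom-domain instructions
      @     IN A     216.239.32.21
      @     IN A     216.239.34.21
      @     IN A     216.239.36.21
      @     IN A     216.239.38.21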

    Read the article

  • SEO Metatags in Web Forms and MVC

    - by Mike Clarke
    Has anyone got any ideas on where we should be adding our SEO metadata in a Web Forms project? The reason I ask is that currently I have two main master pages: one is for the home page and the other is for every other page. They both have the same metadata, which is just a generic description and set of keywords about my site. The problem with that is search engines are only picking up my home page, and my own search engine displays the same title and description for all results. The other problem is that the site is predominantly dynamic, so if you type in a search for "beach", which is a major category on the site, you get 10,000 results that all go to different information but all look like links to the same page to the user. edit It's live here: www.themorningtonpeninsula.com

    Read the article

  • Why does Alexa have two rankings for my website?

    - by MIH1406
    For the following website, Noaoomah Q&A, I have two different Alexa rankings: 1) The public ranking on the Alexa site-info page for the website, which is the usual ranking page and indicates a rank of 318,254 that they claim is updated daily: http://www.alexa.com/siteinfo/noaoomah.com 2) Another public, daily ranking of the same website, viewable either in the freely available Top 1,000,000 Sites list (near the top right of that page) or on the StatsCrop website; this ranking indicates a rank of 253,753. Which one is more accurate? Why are there different daily rankings?

    Read the article

  • Converting web.config from IIS6 to IIS7 format

    - by jamesbee
    I'm a bit stuck; I've kind of been lumbered with a website developed over a year ago. The company that designed it and the company that owns it no longer speak, so I have been left trying to get it to work. I bought the web space and have loaded it onto one of our subdomains while I get it working. The problem is that the hosting provider is running IIS7 and the web.config was designed for IIS6, so I am getting a 500 error because the tags are wrong. Could anyone give me some pointers on how to migrate the current web.config file over to IIS7?
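
    The usual shape of that migration (sketched here from memory, so verify against the actual file) is moving handler and module registrations from <system.web> into <system.webServer>, which IIS7's integrated pipeline reads, and optionally relaxing the validation that triggers the 500:

      <configuration>
        <system.webServer>
          <!-- Stop IIS7 integrated mode rejecting legacy <system.web> entries -->
          <validation validateIntegratedModeConfiguration="false" />
          <modules>
            <!-- entries formerly under <system.web><httpModules> go here -->
          </modules>
          <handlers>
            <!-- entries formerly under <system.web><httpHandlers> go here -->
          </handlers>
        </system.webServer>
      </configuration>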

    Read the article

  • Lost website rankings as a result of using a 302 redirect instead of a 301

    - by george peris
    I have an online store, and during the last 2 days I noticed a big change in its SERPs for some keywords. The site is indexed in local Google for 5 keywords, and the online store is based on Magento 1.7. 5 days ago, I set up an SSL certificate on the online store in order to get better ranking results. I set up the SSL and followed the instructions from Magento to make permanent 301 redirects from the HTTP URLs to the HTTPS URLs, and I replaced all the URLs of the online store with HTTPS. When I saw in the SERPs that the rankings for some keywords were going up and down, I checked some URLs to see if all was going well with the 301 redirects and found that the redirects were 302 and not 301, which is a big bug caused by Magento. I solved the problem with the .htaccess, but still the rankings for the homepage have disappeared. I fetched the site again using Google Webmaster Tools and am waiting to see. Can you please advise if I did something wrong, and what else I could do?
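
    For reference, a minimal .htaccess sketch of the intended behaviour, a true 301 from HTTP to HTTPS independent of Magento's own redirect handling (placed before Magento's rules and assuming mod_rewrite):

      RewriteEngine On
      # Permanently redirect every plain-HTTP request to its HTTPS equivalent.
      RewriteCond %{HTTPS} !=on
      RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]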

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setting of each country site running under its own domain name: www.companyname.de www.companyname.ch www.companyname.at I would love to merge them all in this way: www.companyname.com/de www.companyname.com/ch www.companyname.com/at with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO - the business is very SEO dependent, and being a rental chain, needs to be strong in local results. So the question is: Is there an unavoidable hit in Search Engine Optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?
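
    Beyond the lang attribute, one widely used way to mark up geographic targeting of sub-directories is rel="alternate" hreflang annotations in the <head> of each page (or in the sitemap). A sketch, assuming German-language pages targeted at the three countries:

      <link rel="alternate" hreflang="de-DE" href="http://www.companyname.com/de/" />
      <link rel="alternate" hreflang="de-AT" href="http://www.companyname.com/at/" />
      <link rel="alternate" hreflang="de-CH" href="http://www.companyname.com/ch/" />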

    Read the article

  • Could breadcrumbs be considered as rich anchor text, and drop my ranking?

    - by Gkhan14
    I have breadcrumbs on my site, and the anchor text for the links are an exact match for the keyword I want to rank for. So in other words, it's "rich" anchor text. An example of my breadcrumbs could look like: Home Apple Cheats Apple Points Cheats Apple Seeds Game Cheats Well I've also used RDF markup to construct my breadcrumbs, following the Google article Rich snippets - Breadcrumbs which will allow Google to recognize it. So overall, do I need to worry about exact match anchor text that will get me penalized by something like Penguin? If so, I can remove the breadcrumbs easily.

    Read the article

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is: HTTP/1.1 301 Moved Permanently Date: Fri, 08 Nov 2013 17:26:24 GMT Server: Apache/2.2.16 (Debian) Location: https://mysite.com/ Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 242 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="https://mysite.com/">here</a>.</p> <hr> <address>Apache/2.2.16 (Debian) Server at mysite.com</address> </body></html> I am worried that Google Fetch is not getting the correct title tags and meta information from our homepage, and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and is able to move between the https and http pages without issues. Does anybody have any advice on how we can correctly set this up, or be sure that Google is fetching the correct information?
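
    One small addition that can help here, offered as a sketch rather than a guaranteed fix: a canonical link on the HTTPS homepage (and HTTPS URLs in the sitemap) makes it explicit which version should carry the title and meta information Google reports on:

      <link rel="canonical" href="https://mysite.com/" />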

    Read the article

  • Major increase in Google "not able to follow" errors since introducing a 301 to the site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case sensitive and our app was not, we implemented a 301 in Varnish to redirect to lowercase. Example: if you search for PlumBer StockHOLM you will get a 301 redirect to plumber stockholm, and then plumber stockholm will be cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy amount of "Status - Not able to follow" errors (screenshot omitted). This of course stirred up some panic and I started to read up on the documentation once again. If I clicked on one of the links I got to the help section (screenshot omitted). Well, this is strange, but as the day progressed more and more errors were thrown by Google. We took the decision to make Varnish return 200 instead of the 301. Now when testing the links that appear in the "Not able to follow" section I get a 200 back. I have tested with Chrome, curl, and the Lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish. Why do I get these errors and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
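
    For context, the lowercasing 301 described above is typically done in VCL roughly like the sketch below (Varnish 3-era syntax with the std vmod; treat it as an illustration of the setup, not the site's actual configuration):

      import std;

      sub vcl_recv {
        # Any URL containing an upper-case letter gets bounced to lowercase.
        if (req.url ~ "[A-Z]") {
          set req.http.x-redir = std.tolower(req.url);
          error 750 "Moved Permanently";
        }
      }

      sub vcl_error {
        if (obj.status == 750) {
          set obj.status = 301;
          set obj.http.Location = req.http.x-redir;
          return (deliver);
        }
      }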

    Read the article

  • How can I block abusive bots from accessing my Heroku app?

    - by aem
    My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various results of people who've concluded that it doesn't respect robots.txt (eg, http://www.0sw.com/archives/96). I'm considering updating my app to have a list of banned user-agents, and serving all requests from those user-agents a 400 or similar and adding GSLFBot to that list. Is that an effective technique, and if not what should I do instead? (As a side note, it seems weird to have an abusive scraper with a distinctive user-agent.)

    Read the article

  • Google adwords API - credit card safety question

    - by anon
    Hi, Google is asking me to fax a photocopy of my credit card in order to activate the AdWords API in my MCC. This really sucks, in my opinion; they could have offered a prepaid option. In any case, my questions: 1) Are there alternatives to this - is there a third-party provider who will give me this service without me sending them my credit card info? 2) How secure is it to send my credit card fax via some online fax service? 3) Do you think they will reject the application if I hide my CVV number in the fax? Any other thoughts appreciated :) thanks

    Read the article
