Search Results

Search found 5380 results on 216 pages for 'webmasters'.

  • Fixing HTTP 400-499 error codes with 301 redirects in an .htaccess file

    - by user2131844
    Google previously indexed my website's pages (submitted via sitemap.xml) with URLs in the following format:

        www.domain.com/2013/04/18/hottest-gadgets-of-2013-to-include-in-your-list
        www.domain.com/2013/02/09/ringdroid

    I have resubmitted the sitemap, but there are still 404 errors reported by Google and Bing. Could you please help me write a 301 redirect rule in the .htaccess file so that when someone clicks the URL www.domain.com/2013/02/09/ringdroid they are redirected to www.domain.com/ringdroid? How can I write a rule in the .htaccess file to remove the date part 2013/02/09/?
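    A minimal .htaccess sketch of the kind of rule being asked for, assuming the date prefix is always three numeric segments and that the date-less URLs (e.g. www.domain.com/ringdroid) actually exist on the site:

        RewriteEngine On
        # Permanently redirect /YYYY/MM/DD/slug to /slug
        RewriteRule ^[0-9]{4}/[0-9]{2}/[0-9]{2}/(.+)$ /$1 [R=301,L]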

  • How to set up Google DFP (DoubleClick for Publishers) for a site?

    - by Manoj Kumar
    I have a website and an AdSense account, and I have integrated AdSense so ads are being displayed (480 x 60). Somewhere I read that I can manage the ads being shown on my website (480 x 60) and filter them on a CPM/CPC basis. NOTE: I don't have any ads to display on others' websites; I just want to show others' ads on my website. Now, can I use Google DFP to manage the ads? I mean, is Google DFP useful for me to filter the ads and get more revenue?

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: Will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal: I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.

    Background: I'm working on a static file server (GitHub Pages). No server-side tricks are allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template (header, navigation menu, footer, etc.) in all my files, so it lives in the main index.html.

    The Hijax setup: index.html contains all the OUTER html/css/js, i.e. the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" HTML, and a navigation menu with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) the link is followed to /about.html. With JavaScript enabled (most people) the link updates the URL hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content: about.html does not contain anything extra to try and cloak content for crawlers. It contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>, but it has NO OUTER HTML template (header, navigation menu, footer, etc.). Its <body> contains a <div id="inner-container"> which holds the content that is injected into index.html, and a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the "inner content" of the about page, with a link to index.html to get the full page layout with menu.

    The (sneaky?) redirect: Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER HTML template (header, navigation menu, footer, etc.), so I need a JavaScript redirect to get them over to /#about (deep-linking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user lands on an "enhanced" page back on index.html with all its OUTER template, but the core "page" content is practically identical.

    Known issue with inbound links (e.g. sharing/bookmarking): It seems that if a user shares the URL /#about on their blog, Google ignores everything after the # when allocating inbound links to my site and allocates the value to the / page - see http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try to minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html.

    Duplicate: Sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google and then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question or just leave it on both sites? Please leave a comment.

  • Turning my website into a non-profit

    - by GoodbyeWebsite
    I was given an established website about a year ago. It makes some money on ads, but not much (less than 10k/year). I am no longer able to manage the site and am considering turning it over to another person. I don't want them to turn around and sell it. It was suggested that I establish a non-profit of sorts and transfer the assets to that entity. Does anybody have experience or advice on this subject?

  • When will my old page stop appearing on Google?

    - by Bane
    I recently bought a new address for my Blogger blog, moving from yannbane.blogspot.com to www.yannbane.com. However, www.yannbane.com addresses do not appear when they are searched for. Is this normal? How much time will it take for Google to update its index? yannbane.blogspot.com 301-redirects to www.yannbane.com. Both are added to my Webmaster Tools account, but strangely it shows no data for www.yannbane.com. And finally, is there something I could do to speed up the process?

  • Ditch cPanel / WHM in favour of manual setup

    - by BWRic
    We currently use cPanel / WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It'll be a managed server, so the host will have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM. We don't use many of their features, just creating accounts and databases, and clients do not have FTP access. I'm no sysadmin and come from a Windows / GUI background, but I have some knowledge of setting up development servers.

    WHM: creating accounts. I presume this sets up the Apache virtual host, FTP access and DNS settings. I have some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that as long as the DNS is pointing to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access, as only we will have access, so I could use a single server-wide, htdocs-only account to manage all the sites. The company supplying the dedicated host also provides its own DNS management tool, so I don't need the cPanel one.

    MySQL: creating users and databases. We currently use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for database management and phpMyAdmin for user management.

    Do I need cPanel, or can I get by editing a few text files to create the accounts and then use the MySQL tools for the databases? Or am I missing something major in how the sites are configured?
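    A minimal sketch of the kind of virtual host definition WHM would otherwise generate for each account (the domain and paths here are placeholders, not from the question). With the DNS A record pointing at the server's IP and a block like this enabled, Apache will serve the site's PHP pages:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/example.com/htdocs
            ErrorLog /var/log/apache2/example.com-error.log
            CustomLog /var/log/apache2/example.com-access.log combined
        </VirtualHost>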

  • Clean URLs using .htaccess

    - by Napster
    I am trying to implement clean URLs using .htaccess. After searching for some time, I found this code:

        RewriteRule latestnews/([a-zA-Z0-9]+)/$ http://thinkmovie.in/index.php/latestnews/?nid=$1 [L]
        RewriteRule latestnews/([a-zA-Z0-9]+)$ http://thinkmovie.in/index.php/latestnews/?nid=$1 [L]

    When I try to access the URL http://thinkmovie.in/index.php/latestnews/272 it redirects to http://thinkmovie.in/index.php/latestnews?nid=272, but what I want is to keep the URL shown in the browser's address bar as http://thinkmovie.in/index.php/latestnews/272.
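    A minimal sketch of the internal-rewrite variant, assuming the rules live in the document root's .htaccess and that index.php/latestnews/?nid=272 is the working internal URL (taken from the question). Leaving the scheme and hostname out of the substitution asks mod_rewrite for an internal rewrite rather than an external redirect, so the address bar keeps the clean URL; the exact pattern may need adjusting if the visible URLs include the index.php prefix:

        RewriteEngine On
        # Map /latestnews/272 (with or without a trailing slash) to the
        # query-string form internally, without touching the address bar
        RewriteRule ^latestnews/([a-zA-Z0-9]+)/?$ index.php/latestnews/?nid=$1 [L]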

  • VPS Server OS differences

    - by silvercover
    I have two VPS servers: one runs Linux and the other Windows. I've uploaded the same file to their public_html folders and can see it in my browser via the static IP address of each one, like http://178.63.165.178/getorder/file.xml and http://178.63.165.178/getorder/file.xml. On the other side there is a device called an SMSPrinter that is configured to read those XML files over GPRS and needs a static IP address to reach the destination server. Unfortunately this device can only read the file from the Windows server and cannot reach the file on the Linux server. There is no note in the device's manual requiring a Windows server or any specific OS. I've also set the file permissions on the Linux server to 777 so there is no restriction there. What could be the cause of our problem? Thanks.

  • Why is a brand new website ranking for a top keyword? [on hold]

    - by Prasad EBK
    We've noticed that one of our (new) competitors' websites is ranking 5th for a top keyword with high competition. The website is barely 2 months old. When I checked, not much SEO has been done on it other than basic title/description tags, and there are no backlinks. The website pushed our website down and took its place for the keyword. The only reason that came to my mind is the latest Penguin update. Or is the ranking just temporary? Will it eventually be pushed back down? It's been holding on for at least one month, and it's irritating. Thanks in advance.

  • Grapeshot crawler ignoring robots.txt

    - by QF_Developer
    Has anyone come across a crawler called Grapeshot? It is hammering the same page repeatedly on our website. I believe it is looking for ad-related keywords, based on previous content ad campaigns. The odd thing is we never ran any such campaigns on the page it is so interested in. We do have a few pages running AdSense; is this what has attracted Grapeshot? I've added the following declaration to my robots.txt, but they don't seem to be honouring it:

        User-agent: grapeshot
        Disallow: /

    Any ideas on how to block this nuisance crawler? I'm starting to think the best way is to set up IP rules in IIS.
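    The question mentions IIS, so the Apache 2.4 snippet below is only an illustrative sketch of the general approach - refuse the request at the server by User-Agent instead of relying on robots.txt; the equivalent on IIS would be a URL Rewrite rule that matches the User-Agent header:

        # Deny any request whose User-Agent contains "grapeshot" (case-insensitive)
        SetEnvIfNoCase User-Agent "grapeshot" bad_bot
        <RequireAll>
            Require all granted
            Require not env bad_bot
        </RequireAll>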

  • Pay for Graphic Designer vs Programmer

    - by FrankSinatra
    In a corporate web-design setup, who typically makes more per hour, the graphic designer or the programmer? By graphic designer, I mean somebody who builds mockups probably in photoshop, selects font-styles, colors, etc. Most things layout-wise are near pixel-perfect, but likely after the initial implementation by the programmer, there will be a lot of small changes directed by the graphic designer. By programmer, I mean somebody who is coding the CSS, the HTML, and light backend support, probably in PHP. The programmer will attempt to duplicate the mockups given the limitations of the medium, and consult with the graphic designer afterwards on what changes are tangible and which are not. Both probably have an undergraduate degree from a respected four-year institution.

  • Oversizing images to produce better looking pages?

    - by Joannes Vermorel
    In the past, improper image resizing used to be a big no-no of web design (not to mention improper compression formats). Hence, for years I have stuck to the policy that images (PNG or JPG) are resized on the server to match, pixel for pixel, the resolution they will have in the rendered page. Recently, I hastily designed an HTML draft with oversized images, using inline CSS such as width:123px and height:123px to resize them. To my (slight) surprise, the page turned out to look much better that way. Indeed, with better screen resolutions, some people (like me) tend to browse with some level of zoom (125% or even 150%), otherwise fonts are just too small on screen. If the image is strictly sized, the enlarged image appears blurry (a pixel-interpolation effect), but if the image is oversized the result is much better. Obviously, oversizing images is not an acceptable pattern if your website is intended for mobile browsing, but is there a case where it would be considered acceptable, especially if the extra page weight is small anyway?

  • Suggestions required to build an ECommerce Platform

    - by Haris
    For a prospective client we have to offer a solution that provides the following systems:

        CMS
        Order Management
        Shopping Cart
        CRM
        Helpdesk
        Accounting & Finance
        Custom Functions

    In order to save time and avoid reinventing the wheel, our idea is to integrate different off-the-shelf solutions. Their first requirement is that the system has to be hosted in their country, which I think excludes applications like Aplicor, NetSuite & Salesforce. Basically the nucleus would be the CMS, which would integrate all the other apps. PHP or .NET based solutions would be our preference, as we have in-house expertise. So far these are a few combinations I have come up with:

        Joomla (CMS) + VirtueMart (cart + ordering) + SugarCRM + Open ERP (finance) + OTRS
        Magento (CMS + cart + ordering) + SugarCRM + Open ERP (finance) + Helpdesk Ultimate
        Drupal (CMS) + Ubercart (cart + ordering) + SugarCRM + Open ERP (finance) + Support Ticketing System
        SharePoint (CMS) + OptimusBT (cart + ordering) + Dynamics CRM + Great Plains + SharePointHQ
        DotNetNuke (CMS) + DNNSpot (cart + ordering) + Sigma Pro (CRM + helpdesk) + Open ERP

    For the helpdesk I liked Zendesk, but the server location was the stopping factor; similarly for finance and CRM I liked Aplicor. I would not like to go into detailed requirements, as it would make things very complex. Could you please suggest which of these options are worth looking into? What other options do we have?

  • Directing a domain from a 1und1 hosting solution, with URLs intact

    - by Jelmar
    I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with the temporary URL http://yogainun.mysubname.com/, and the domain name that is to be applied to it is hosted at 1und1.de. Right now I have set it up so that, from the 1und1 domain hosting, the address http://www.yoga-in-unternehmen.de/ is frame-redirected to the subdomain I just referred to. But this is not what I want: http://www.yoga-in-unternehmen.de/ is to be the domain itself, and with the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up, which is exactly what I need them to do. With GoDaddy in a similar case I just turned on DNS and changed the name servers. That worked without a problem, but with 1und1 it does not. Is there something I am missing?

  • How can I exclude content in my notifications bar from being indexed?

    - by Liam E-p
    Of course I want my content to be indexed quickly by search engines, but not my notifications bar. The notifications bar contains the last 30 changes to content on the site, and I don't want it to show up in the SEO meta description for my pages. The notifications are generic, so they rarely provide any relevant information: if an article named "123" is created, a notification is generated that says "Article "123" was created by xxx at 12:00AM". I'm now wondering if this is a content design problem, as only a third of that information (the title, and what happened) is actually relevant to users. What I'm basically wondering is how I can optimise this so that search engines don't show this generic text in their result snippets.

  • Broken links in content reports when tracking subdomains with Google Analytics

    - by Rob Sobers
    I have a tracking code that I use on both my main site and my blog, which is on a subdomain:

        www.example.com
        blog.example.com

    I have a single profile in Google Analytics and use advanced segments to look at traffic to the main site vs. traffic to the blog.

    Problem 1: When I'm browsing my content reports under Standard Reporting, the "Page" column doesn't show the hostname, so I can't easily differentiate www.example.com/index.html from blog.example.com/index.html. According to the docs, there is a filter that is supposed to make GA prepend the hostname to the page URL in content reports, but it doesn't seem to work.

    Problem 2: When I click the little "Open in new window" icon next to a given page in a content report, it always assumes the page lives on www.example.com, so I get 404s when the page is actually on blog.example.com.

    Is there a good solution for these subdomain tracking problems?

  • Redirect AFTER Initiating download

    - by mashup
    I have a question: is there any way to initiate a download and, AFTER the user has confirmed the download, redirect to another site? Is something like that possible via ASP or another language commonly used for websites?

    Bad "user experience" scenario in PHP (in use right now):
    a) User comes to the site and clicks the download button
    b) User sees the "download" landing page and gets redirected after 5 seconds
    c) Download starts on the thank-you page

    Good "user experience" scenario (my dream solution, what I want):
    a) User comes to the site and clicks the download button
    b) Download starts immediately on the landing page
    c) Download confirmed, redirect now to the thank-you page

    Any programming language is a go for this.
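    A minimal sketch of one common way to get the "good" flow with server configuration alone, assuming the downloadable files sit in their own /downloads/ folder and mod_headers is available (both are assumptions, not from the question): send the visitor straight to the thank-you page and have that page link or meta-refresh to the file, which is always served as an attachment, so the download starts without leaving the page.

        # .htaccess inside the /downloads/ folder (requires mod_headers)
        <FilesMatch "\.(zip|pdf|exe)$">
            Header set Content-Disposition "attachment"
        </FilesMatch>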

  • Assign subdomains to separate ports on web server

    - by Michael Frank
    I have set up an Abyss web server as a little experiment, and I want to know if it is possible to assign subdomains to the different ports on the machine the web server is running on. I have a couple of web UIs that I'd like to give their own addresses:

        192.168.1.1:8000 becomes example.com/webui1/
        192.168.1.1:8001 becomes example.com/webui2/

    The web UIs are currently available by accessing their ports directly, e.g. example.com:8000. I have tried using a reverse proxy, but it seems that this is only usable on one internal IP at a time. What other options do I have?
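    A minimal sketch of the path-based mapping the question describes, written as Apache mod_proxy directives purely for illustration (the question uses Abyss, whose reverse-proxy settings differ; these lines belong in the server or virtual-host configuration): each URL prefix is proxied to the corresponding local port.

        # Requires mod_proxy and mod_proxy_http
        ProxyPass        /webui1/ http://127.0.0.1:8000/
        ProxyPassReverse /webui1/ http://127.0.0.1:8000/
        ProxyPass        /webui2/ http://127.0.0.1:8001/
        ProxyPassReverse /webui2/ http://127.0.0.1:8001/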

  • Can many different log-in pages result in SEO duplicate-content and/or low-quality penalties?

    - by Noam
    I have internal pages that rely on an external API, which I would like to build on user request. Two options I thought about:

    1. Make lots of 'thin' pages that specify that if you want content about X, you need to log in, and then the page will be built.
       Pros: the user understands what they'll get when logging in.
       Cons: the SEO implications of such a solution, due to the mass of 'low quality' and 'cross-site duplicate' content.

    2. Make them all redirect to ONE generic log-in page.
       Pros: no duplicate, low-quality content.
       Cons: lots of internal links to the same log-in page.

    Which would you recommend?

  • Advertising Campaign Bidding Network

    - by David
    I hope this is the correct Stack Exchange site to post on; it seems the most relevant. I'm looking for more ways to monetize my site. I'm already using the usual Google AdSense, TradeDoubler, ad networks, etc. What I want to know is whether anyone knows of a service/network out there (similar to TradeDoubler) which allows you to accept (or bid on) campaigns, primarily CPM-based. For example: I log in and get a list of available advertising campaigns and the prices they are willing to pay, or I can put in a bid with an impression count and CPM rate. I'm looking for something ideally Europe or UK based, but I'm open to others. Thanks!

  • Which JavaScript carousel zooms blocks from the playlist?

    - by Iain Hallam
    I saw a carousel/slider for displaying featured content a while ago that does something most don't. It started fairly simply, with the top feature shown large and a playlist of the other featured stories to the side. Feature 1 then began to slide towards the bottom right, while feature 2 moved to occupy the main slot and the previews of features 3 and 4 moved up. The slider had then completed a whole swap and was ready to do the same thing with feature 3. My Google-fu seems to be lacking in finding this again; does anyone know of this slider? I think it was based on one of the frameworks, but I'm not sure whether it was jQuery or one of the others.

  • Files not refreshing after being uploaded and overwritten via FTP

    - by guisasso
    I have been having some trouble updating my website this morning. After editing some files, in particular a CSS file, and uploading them to the server, the changes would not appear, as if the file had not been overwritten. The file size on my local machine was 3.244 kb and on the server 3.080 kb. Even after deleting the whole folder and uploading everything again, the same thing happened. Answer: CloudFlare.

  • Is this Anti-Scraping technique viable with Crawl-Delay?

    - by skibulk
    I want to prevent web scrapers from abusing the 1,000,000 pages on my website. I'd like to do this by returning a "503 Service Unavailable" error code to users that access an abnormal number of pages per minute. I don't want search engine spiders to ever receive that error. My inclination is to set a robots.txt crawl-delay that will ensure spiders access a number of pages per minute that stays under my 503 threshold. Is this an appropriate solution? Do all major search engines support the directive? Could it negatively affect SEO? Are there any other solutions or recommendations?
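    A minimal robots.txt sketch of the directive in question, assuming a delay of 10 seconds (roughly six pages per minute) is comfortably under the 503 threshold. Note that support varies: Bing and Yandex honour Crawl-delay, but Google ignores it and takes its crawl rate from Webmaster Tools settings instead, so the threshold cannot rely on this directive alone.

        User-agent: *
        Crawl-delay: 10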

  • Local Business Listing Dashboard

    - by Steve
    I operate a website for a West Australian company, and the company has listings in local business directory websites. Currently we are listed in: www.HotFrog.com.au www.Google.com/Places www.TrueLocal.com.au www.StartLocal.com.au www.localstore.com.au www.communityguide.com.au www.yelp.com.au www.aussieweb.com.au Do you know of any method for examining account stats (profile views, profile clicks etc) within a dashboard, so I can see at a glance how each of our listings is going? I'd be happy to build a dashboard if necessary, but I'm not confident I currently have the skills to accurately display only the correct concise information. Would I use iframes? See if they have APIs? Is there a dashboard framework I could use?

  • robots.txt not updated

    - by Haridharan
    I have updated my robots.txt file to block some URLs and files from Google search results, but the files are still being displayed there. Following a suggestion from a site, I tried to get the updated robots.txt picked up by going to Google Webmaster Tools, Health - Fetch as Google, typing the URL and clicking the Fetch button, but the files are still displayed in the search results. Note: in Google Webmaster Tools, under Health - Blocked URLs - robots.txt file, the downloaded date appears to be two days old.
