Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 119/216

  • Redirect a subdomain (weblog) to a new domain without .htaccess access (301)

    - by fafa
    I have a problem I can't find a solution to on the web. I have a blog with PR 1 at the subdomain aaaa.domain.com, where domain.com is a blog-hosting service. I have now bought the domain newdomain.com, and I want to tell Google to redirect the old subdomain to the new domain and send its traffic there. But I can't access .htaccess to set up a 301 redirect; all I can do is put HTML code into the pages. How can I do this? When I use "Change of Address" in Google Webmaster Tools it says: "Restricted to root level domains only".
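
    Since only page markup can be edited here, a minimal HTML-level sketch of the usual workaround (the target URL is assumed): a zero-delay meta refresh plus a canonical link, which Google has said it generally treats much like a permanent redirect.

        <!DOCTYPE html>
        <html>
        <head>
          <!-- Point search engines at the preferred new URL -->
          <link rel="canonical" href="http://newdomain.com/">
          <!-- Zero-delay meta refresh; crawlers treat this much like a 301 -->
          <meta http-equiv="refresh" content="0; url=http://newdomain.com/">
        </head>
        <body>
          <p>This blog has moved to <a href="http://newdomain.com/">newdomain.com</a>.</p>
        </body>
        </html>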

    Read the article

  • Should I use mod_wsgi embedded mode if I have full control of Apache?

    - by mgibsonbr
    I'm managing a bunch of sites and applications on shared hosting, using Django via mod_wsgi. I had planned to use daemon mode from the beginning (to avoid restart problems), but ended up purchasing a plan that gives me a dedicated Apache instance. I kept using daemon mode for convenience, but I'm afraid it's consuming more server resources than it should (I have a separate project for each site, each with its own process and process group), so I'm considering switching to embedded mode. Would that be a sensible thing to do? I'd still be able to restart Apache anytime I need to, and I wouldn't need so many child processes and sockets (so I hope resource usage would decrease). But I'm unsure whether doing so would make it harder to manage those sites (if I need to update one, I have to restart all) or whether the applications would no longer be properly isolated from one another. Are these problems really significant (or only a minor nuisance), and are there other drawbacks I couldn't foresee? I'm looking for advice on any aspect of this setup: maintainability, performance, security, etc. Tips for improving the current setup are also welcome (I know how to configure a basic mod_wsgi setup correctly, but I'm clueless about sensible values for threads, processes, etc.).
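
    For reference, a minimal daemon-mode sketch of the kind of per-site setup described (site name and paths assumed); trimming the process/thread counts like this is often tried before abandoning daemon mode, since daemon mode keeps each site isolated and individually reloadable:

        # One daemon process group per site; small process/thread counts
        # keep the footprint down on a modest dedicated Apache instance.
        WSGIDaemonProcess site1 processes=1 threads=10 display-name=%{GROUP}
        WSGIProcessGroup site1
        WSGIScriptAlias / /var/www/site1/site1/wsgi.py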

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and was consistently showing up at place #3 for its primary keyword, according to my client, and has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
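
    For context, the ".htm" compensation described above would look something like this hedged .htaccess sketch (the WordPress permalink structure is assumed):

        RewriteEngine On
        # 301 each legacy static URL to its extensionless WordPress permalink
        RewriteRule ^(.+)\.htm$ /$1/ [R=301,L]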

    Read the article

  • Remove scrollbar from iframe [migrated]

    - by Faith In Unseen Things
    Using this code: <iframe frameborder="0" style="height: 185px; overflow: scroll; width: 100%" src="http://www.cbox.ws/box/?boxid=439&boxtag=7868&sec=main" marginheight="1" marginwidth="1" name="cboxmain" id="cboxmain" seamless="seamless" scrolling="no" allowtransparency="true"></iframe> This is how it appears (the shoutbox on the homepage of www.talkjesus.com). How do I remove the horizontal scrollbar and modify the CSS of the vertical scrollbar? Thank you.
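
    A hedged sketch of the usual approach: the iframe's own scrollbars can be suppressed from the parent page, but because the cbox content is served from another domain, its inner scrollbars can only be styled by CSS inside that page (e.g. WebKit's ::-webkit-scrollbar rules), which the embedding site cannot reach.

        <!-- overflow:hidden plus scrolling="no" suppresses both scrollbars
             that belong to the iframe element itself -->
        <iframe src="http://www.cbox.ws/box/?boxid=439&boxtag=7868&sec=main"
                style="height: 185px; width: 100%; overflow: hidden; border: 0"
                scrolling="no" allowtransparency="true"></iframe>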

    Read the article

  • Website Hosting/Registration [closed]

    - by Ricko M
    Possible Duplicate: How to find web hosting that meets my requirements? I am planning to launch a website soon and wanted to know what solutions are available for hosting and registration. Starting with domain registration: any registrar you have used or preferred? I am considering either GoDaddy or 123-reg. Does it even make a difference which you choose? Is there any fine print I need to worry about? I am based in the UK, if that helps in resolving any issues encountered. Does my hosting need to be with the same company where I registered the domain? If not, will there be any transfer fees if I change my hosting? Can I just register the name now and worry about hosting later? At the moment I plan to get it up and running using some sort of tool or template, and perhaps add the bells and whistles down the line. I understand 123-reg has its own builder tool available, and a few CMS solutions were suggested, like WordPress, Drupal and Joomla. I am a C++ developer, not a web programmer, but I do feel the need to be able to open the hood and make changes if I see fit. So I guess I am looking for a solution where I can easily drag and drop the widgets I need and customize it when the time comes. Which CMS would you recommend? Extras: what extras do you need to get? I was advised to get WHOIS privacy to keep the spambots away; is there anything else you would recommend I keep my eyes open for before I sign on the dotted line?

    Read the article

  • How to allow Google Images search to bypass hotlink protection?

    - by Marco Demaio
    I've noticed that Google Images seems to index my images only if hotlink protection is off. I use hotlink protection anyway, because I don't like the idea of people using up my bandwidth. This is the code I use to protect my sites from being hotlinked:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]

    But to let Google Images bypass my hotlink protection (I want Google Images search to show my images), would it suffice to add lines like these?

        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com$ [NC]

    I'm wondering: does the crawler crawl only from google.com? What about google.it, google.co.uk, etc.? FYI: I did not find information about this in Google's official guidelines. I assume hotlink protection prevents Google Images from showing images in its results, because I ran some tests and it does seem to prevent my images from being shown in Google Images search.
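
    A hedged sketch of a broader whitelist (pattern assumed, worth testing before relying on it): Googlebot itself usually sends no Referer header, which the !^$ condition above already lets through, so the extra condition mainly matters for images displayed inside Google Images result pages, which can arrive from any country TLD:

        # Allow any google.<tld> referer (google.com, google.it, google.co.uk, ...)
        RewriteCond %{HTTP_REFERER} !^https?://([^.]+\.)*google\.[a-z.]{2,6}(/.*)?$ [NC]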

    Read the article

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us, that involves as little work as possible to implement from their side. We both have Google Analytics.

    Option 1: They record a goal in their GA account whenever someone makes a purchase on their site. They see how many completed goals are marked as referral traffic from our site and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and probably we will have to chase them for commission payments. Also, since we won't have access to their GA data, we will need to trust that they report all sales accurately.

    Option 2: Sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through the whole sign-up process just to deal with us and pay us commission.

    I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign-up and installation procedure is simple and fast.
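
    For reference, a sketch of the click-side tracking this setup implies on our end (classic ga.js _gaq API assumed, with hypothetical link and label values):

        <a href="http://retailer.example/product/123?ref=mysite"
           onclick="_gaq.push(['_trackEvent', 'Referral', 'Click', 'product-123']);">
          View at retailer
        </a>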

    Read the article

  • My site has crashed. Does anyone have some info?

    - by marwan
    Hi all. I registered a domain name for my website with a hosting provider. I gave the domain name, along with the FTP details, to a freelancer to develop the site in WordPress. The freelancer developed it, received full payment, and the site was working fine. Since then I have not changed the admin login or the FTP details, which means the freelancer still knows them. A week ago I found that some links on my site were not working. I sent him an email about this, and he said he would fix it if I gave him the FTP details, so I did. Next I found that the entire site was gone. Then he emailed me, unprompted, saying that someone had gained access to my server, removed all of my site's files, and installed Drupal instead, and that he could rebuild the site in one day for a full fee of 250 USD. Can anyone tell me what I can do in this situation to find out who did this? Could it be the hosting provider or the freelancer? And is there any possibility of getting my site back onto the server? I will appreciate any info on this. Regards, thanks.

    Read the article

  • What is the advantage to hosting static resources on a separate domain?

    - by Michael Ekstrand
    I notice a lot of sites host their resources on a separate domain from the main site, e.g. Stack Exchange using sstatic.net, Barnes & Noble using imagesbn.com, etc. I understand that there are benefits to putting your static resources on a separate host, possibly with an efficient static-file web server like nginx, freeing up the main server to focus on serving dynamic content. Similarly, outsourcing to a shared CDN like CloudFront or Akamai is logical. What is the benefit of using a separate domain otherwise, though? Why sstatic.net instead of static.stackexchange.com? Update: several answers miss the core question. I understand that there is benefit to splitting between multiple hosts (parallel downloads, slimmer web server, etc.). But what is more elusive is why multiple domains. Why sstatic.net rather than static.stackexchange.com as the host for shared resources? So far, only one answer has addressed that.

    Read the article

  • Is it unwise to blacklist an IP address?

    - by hawbsl
    We have a form on a commercial website which has been abused (but only once or twice) by someone from a particular IP address. A colleague wants to blacklist that IP address from the website. Seems to me that's overkill, and that there's a risk that genuine customers sharing that same IP address would be blacklisted too. I suppose a big part of my question is how many people might be sharing that same IP address and could be affected by our blacklist. I suspect that's a "how long's a piece of string" question but some ballpark answer would be really helpful. We're in the UK if that's significant.

    Read the article

  • Why is Google PageRank not showing after redirecting www to non-www?

    - by muhammad usman
    I have a fashion website. I had redirected my http:// (non-www) domain to the http://www domain, and my preferred domain in Google Webmaster Tools was http://www. Now I have redirected http://www to the http:// (non-www) domain and changed my preferred domain as well. Now Google PageRank is not showing for even a single page. Would anybody please help me and let me know if I have done something wrong? Below is my .htaccess redirect code:

        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        RewriteCond %{HTTP_HOST} ^www\.deemasfashion\.com$ [NC]
        RewriteRule ^(.*)$ http://deemasfashion.com/$1 [R=301,L]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
        RewriteRule ^index\.html$ http://deemasfashion.com/ [R=301,L]
        RewriteRule ^index\.htm$ http://deemasfashion.com/ [R=301,L]

    Read the article

  • How do Expires headers and cache manifest rules work together?

    - by Robert K
    I find the W3C's official Offline Web Applications specification to be rather vague about how the cache manifest interacts with headers such as ETag, Expires, or Pragma on cached assets. I know that the manifest should be checked with each request so that the browser knows when to check the other assets for updates. But because the specification doesn't define how the cache manifest interacts with normal cache instructions, I can't predict precisely how the browser will react. Will assets with a future expiration date be refreshed (no matter the cache headers) when the cache manifest is updated? Or, will those assets obey the normal caching rules? Which caching mechanism, HTTP cache versus cache manifest, will take precedence, and when?
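
    For concreteness, a minimal manifest sketch (file names assumed); the common convention is to bump a version comment to signal that everything in the CACHE section should be refetched, which is exactly where the interaction with Expires/ETag headers becomes interesting, since the refetch may still be answered from the HTTP cache:

        CACHE MANIFEST
        # v2 - change this comment to trigger an update check of the assets below
        CACHE:
        /styles.css
        /app.js
        NETWORK:
        *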

    Read the article

  • Is there a way to hide text from descriptions in Google?

    - by Linda H
    The first line of text on all of our client's product pages is "Download hi-res images", which of course isn't what we'd want in the description when people search for their products. Is there any way to hide this text/link so that Google and the others just ignore it and use the text description below instead? I suppose we could use a meta description, but the client isn't very good with computers, and it's such a small site it seems silly.
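
    For reference, the meta description route mentioned above is a one-line addition per page (wording hypothetical); search engines generally prefer it over page text when composing the snippet:

        <meta name="description" content="Product name: a short, human-written summary of this product page.">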

    Read the article

  • Using an old penalized domain for a new website

    - by MiladSafaei
    I had a website with two domains, firstdomain.com and first-domain.com. The main domain was first-domain.com, and the other was 301-redirected to it. The main domain received a Google Penguin penalty some months ago. I uploaded the site to a new domain and removed the old domain from Google's index using the remove URL tool in Webmaster Tools. Now I want to use firstdomain.com (which was redirected to the penalized domain) for a new, fresh website with new and perfect content. Is it probable that the history of this domain will affect the new website and harm its ranking?

    Read the article

  • Webserver insists on opening "blog1.php" instead of "index.php"

    - by pepoluan
    I'm at my wits' end. I have just ripped out a website and am in the process of rebuilding everything. Previously, the 'home page' of the website was a blog at www.mydomain.com/blog1.php. After exporting everything, I deleted the whole directory and, based on a request, immediately created a blog/ directory. The idea is to get the blog back up as soon as possible and temporarily redirect people accessing www.mydomain.com to the blog. Accessing the blog via http://www.mydomain.com/blog/ works. So I put in an index.php file containing a (temporary) redirect to the blog's address. The problem: the server insists on opening blog1.php instead of index.php, even after we deleted all the files (including .htaccess). Even putting in a new .htaccess file with the single line DirectoryIndex index.php doesn't work; the server stubbornly wants blog1.php. Now, the server is actually shared webhosting, so I have no direct access to it; I have to do my work via cPanel. Currently I work around this issue by recreating blog1.php, but I really want to know why the server does not revert to opening index.php. Did I perhaps miss some important setting in the byzantine cPanel menu pages?
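
    One hedged interim workaround (paths assumed), sidestepping whatever is pinning the index to blog1.php: leave a stub .htaccess that restates the index and sends the stale entry point straight to the blog:

        DirectoryIndex index.php
        # Keep the old entry point working while the index issue is chased down
        Redirect 301 /blog1.php http://www.mydomain.com/blog/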

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "Fetch as Google" or "Fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
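
    A hedged sketch of one way out (PHP, matching the X-Powered-By header above; names hypothetical): serve the front page itself with a 200 and let the client rewrite the address bar to the freshly generated id, so crawlers see content instead of a Location header:

        <?php
        // Generate a short random id server-side instead of redirecting to it.
        $id = substr(md5(uniqid('', true)), 0, 10);
        ?>
        <!DOCTYPE html>
        <html>
        <head><title>New note</title></head>
        <body>
        <textarea id="note"></textarea>
        <script>
          // Show the per-note URL without a server-side 302.
          if (history.replaceState) {
            history.replaceState(null, '', '/<?php echo $id; ?>');
          }
        </script>
        </body>
        </html>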

    Read the article

  • iFrame ads from site to site

    - by user28327
    How can I put ads from site A onto site B via an iFrame, rather than framing the whole site like this: <iframe src="http://www.site.com"></iframe>? Example: site A has this AdSense code:

        <script type="text/javascript"><!--
        google_ad_client = "ca-pub-3434343507";
        /* site */
        google_ad_slot = "343435270";
        google_ad_width = 728;
        google_ad_height = 90;
        //-->
        </script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>

    Site B would carry the same block inside an iFrame:

        <!-- iFrame -->
        <script type="text/javascript"><!--
        google_ad_client = "ca-pub-3434343507";
        /* site */
        google_ad_slot = "343435270";
        google_ad_width = 728;
        google_ad_height = 90;
        //-->
        </script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>
        <!-- /iFrame -->
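
    A hedged sketch of the mechanics being asked about (file name assumed): put only the ad snippet in a small page on site A, and embed that page from site B. Note that AdSense program policies have historically restricted placing ad code inside iframes, so this pattern should be checked against the current policies before use:

        <!-- On site B: embed just the ad page hosted on site A -->
        <iframe src="http://site-a.example/ad-slot.html" width="728" height="90"
                frameborder="0" scrolling="no"></iframe>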

    Read the article

  • Evidence for automatic browsing - Log file analysis

    - by Nilani Algiriyage
    I'm analyzing web server logs in both Apache and IIS formats. I want to find evidence of automated browsing: web robots, spiders, bots, etc. I used the Python package robot-detection 0.2.8 to detect robots in my log files, but I know there may be other robots (automated programs) that have traversed the site which robot-detection cannot identify. So I want to ask: are there any specific clues in log files that human users do not leave but automated software does? Do they follow a specific navigation pattern? I saw some requests for favicon.ico; does this indicate automated browsing? I found this article and this question with some valuable points.
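
    As a starting point beyond robot-detection, a hedged Python sketch of two commonly cited log heuristics (keyword list and field parsing assumed): user-agent substring matches and hits on robots.txt, which ordinary visitors essentially never request:

        import re

        # Common substrings found in crawler user-agent strings
        BOT_UA = re.compile(r'bot|crawl|spider|slurp|archiver|wget|curl', re.I)

        def looks_automated(user_agent, path):
            """Heuristic only: flags likely non-human requests."""
            return bool(BOT_UA.search(user_agent)) or path == '/robots.txt'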

    Read the article

  • Webmaster Tools 500 crawl errors for ASP faceted navigation that does not exist

    - by user19007
    I am getting 2,500 type-500 URL errors in Google Webmaster Tools. These pages are faceted navigation results that cannot be reached by a site visitor; the pages do not exist. We are using faceted navigation on the Volusion platform (ASP.NET, I think). I have specified URL parameters in Webmaster Tools so that Google will not try to index anything faceted, but this does not stop the errors from being generated. I am concerned about how this might affect SEO (bleeding PageRank). I can provide additional information if needed. I am not sure how to solve this; I have started down the path of creating 301s, but I'm having some difficulty there as well.
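
    A hedged robots.txt sketch of another commonly used lever (the parameter name is hypothetical; Volusion's faceted URLs would need to be matched to the real pattern), which keeps Googlebot from requesting the 500-producing URLs at all:

        User-agent: *
        # Block crawling of faceted navigation URLs (adjust to the real parameter)
        Disallow: /*?facet=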

    Read the article

  • How long does Google Webmaster Tools need to generate content keywords if URL masking is enabled? [closed]

    - by user1439968
    Possible Duplicate: What is domain "masking" or "cloaking"? Why should it be avoided for a new web site? My real domain is domain.in, but URL masking has been enabled and the masked URL is domain2.in. I added the URL bputdoubts.21backlogs.in to Google Webmaster Tools a week ago, but content keywords haven't been generated. When can I expect the content keywords to be generated? And is URL masking a problem for getting visitors from Google search?

    Read the article

  • Forum widget for website

    - by Ivan Kuckir
    I have a Discussion section on my website and I am getting 3-10 comments each week (this may increase in the future). I am using one Disqus "widget" for the whole discussion, but I would like to give it better structure, with threads, categories, etc. Do you know of a "forum widget" service with the functionality of phpBB (threads, categories, ...) and the simplicity of Disqus (installs with an IFRAME, login with Facebook/Google, ...)?

    Read the article

  • How to prevent Google from finding my admin index page?

    - by krish
    I am running a website, but for some days I stopped it and put up an under-construction page, because the index of the admin page is visible to the outside world through a Google search. A friend told me that my website's index is visible, and that it is one step away from accessing the password file; he showed me this very simply using a Google search. How can I prevent this? I am hosting the site with a hosting company and reported this to them, but they simply replied that it is still secure and I need not worry. Do I really not need to worry, and can I continue running my site with the index of the admin page visible?
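
    Regardless of the host's reassurance, a hedged .htaccess sketch of the standard hardening steps (paths assumed): turn off directory listings and password-protect the admin directory:

        # In the site root or the admin directory: disable directory listings
        Options -Indexes

        # In the admin directory: require a login on top of the app's own
        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /home/user/.htpasswd
        Require valid-user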

    Read the article

  • Joomla 2.5: list links to articles in 2 columns

    - by semyon
    I have a Joomla! 2.5 website, and I have to add a lightbox popup that will contain links to all articles from a specific category (and its subcategories). If anyone can suggest an all-in-one solution, that would be great! But generally I'm asking: how do I list links to all articles from a specific category and its subcategories in two or three columns? I know I can set the Category Blog view to output only article links, without full/intro text, but in that case the links will all be in one column, and I need at least two. The possibility of grouping article links by subcategory would be great as well (but this is something I can live without). How can this be done? I'm looking for: a standard way of configuring this; a template override (I'm using the T3 framework); a custom extension; or any other method.
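
    If a pure-presentation route is acceptable, a hedged CSS sketch (the selector is hypothetical; a template override would attach a known class to the links list): let the browser split the single-column list of links into two columns:

        /* Split the category link list into two columns (CSS3 multi-column) */
        ul.article-links {
          -moz-column-count: 2;
          -webkit-column-count: 2;
          column-count: 2;
        }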

    Read the article

  • How to measure the conversion rate of your Amazon affiliate program?

    - by user359650
    I plan on selling products through the Amazon affiliate program. What I know I can track is: what products people view on my website (default Google Analytics pageview behaviour), and what affiliate links people click on my website (with GA _trackEvent). What I am missing is: what products people end up buying after clicking on the affiliate links. Does the Amazon affiliate program offer any mechanism for linking a purchase with data from your website? I noticed that I was able to add custom parameters and values to my affiliate links and the link checker was still happy with them; if Amazon reported which links initiated an order, I would be able to cross-reference the orders using custom parameters...

    Read the article

  • Submitting new site to directories - will Google penalize?

    - by Programmer Joe
    I just started a new site with a forum to discuss stocks, and I've already submitted it to DMOZ. To help promote the site and to help people looking for stock discussion forums find it, I'm thinking of submitting it to a few more directories, but I'm hesitant because I know Google will penalize a site if it believes the backlinks to the site are spammy and/or low quality. So I have a few questions: 1) If I submit my site to directories with a PR between 4 and 5, will those backlinks be considered spammy/low quality? I noticed most free directories have a PR between 4 and 5, but I don't know whether backlinks from those directories would be considered spammy by Google. 2) I'm thinking of submitting it to Best of the Web and JoeAnt, but these are paid. Does anybody have any experience with these two paid directories? Are they considered higher quality by Google?

    Read the article
