Search Results

Search found 5380 results on 216 pages for 'webmasters'.

  • What nameserver should I use?

    - by Qmal
    Let's say I own the domain site.com, which I bought from one company but want to host with another. I don't know what to do. Here is the scenario: I bought site.com from a registrar that uses its own nameservers, ns.a.com, and the domain currently points at their servers. I then bought hosting from a different company that uses the nameservers ns.b.com. Should I simply change the nameservers on my domain from ns.a.com to ns.b.com? Or should I keep ns.a.com and point the individual DNS entries in my domain control panel at the host's IP addresses?
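
    For reference, the second option amounts to editing records like these in the registrar's DNS panel (a sketch in zone-file notation; 203.0.113.10 stands in for the hosting company's actual IP address):

        ; Keep ns.a.com as the nameservers, but point the
        ; records at the new hosting company's server
        site.com.        3600  IN  A      203.0.113.10
        www.site.com.    3600  IN  CNAME  site.com.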

  • DNS query re website Status: inactive

    - by Matthew Brookes
    There is a website I am assisting with which, when you do a DNS lookup on Who.is, returns a website status of "inactive". I also noticed that the server type is reported incorrectly. This is not a service I generally use for DNS queries, so I am unsure how reliable it is. Other DNS checking services report what I would expect, and the site is functioning correctly. The research I have done into the "inactive" website status suggests an issue with the DNS configuration. I am looking for help understanding whether this is something to be concerned about and, if possible, how this value gets set in the first place and how to update it.

  • Is there a way to disallow only crawling in https in robots.txt?

    - by David Wilkins
    I just realized that Bingbot is crawling my company's website's pages over https. Bing already crawls the site over http, so this seems redundant. Is there a way to specify Disallow: / for https only? According to Wikipedia, each protocol has its own robots.txt, and according to Google's robots.txt specification, robots.txt applies to http AND https. I don't want to Disallow: / for Bing entirely, just over https.
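
    Since each protocol gets its own robots.txt, one approach (a sketch, assuming the https side of the site can be served its own file) is to publish a blocking robots.txt only at the https URL:

        # robots.txt served only at https://example.com/robots.txt
        # (bingbot is Bing's documented crawler token)
        User-agent: bingbot
        Disallow: /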

  • Is Pixelmator a viable alternative to Photoshop? [migrated]

    - by ChrisR
    I've always been a Photoshop user; I know the ins and outs and know my way around all the tools I need for my web design work. But now I'm faced with a dilemma: at my new job I don't have the budget for a full Photoshop license, so I'm wondering, is Pixelmator a good alternative? I use Photoshop mainly to slice a design into separate images, so the ability to enable/disable layers is a must, PSD compatibility too, ... Does anyone have experience with Pixelmator?

  • Symbolic link not allowed or link target not accessible: /var/www on Ubuntu 11.04

    - by Jamie Hutber
    I am getting a 403 when I access http://mayfieldafc.local/. Looking in the Apache logs, I see:

        [Wed Nov 16 12:32:59 2011] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www

    I have what I believe to be the correct permissions set on /var/www: hutber (my user) can create and delete files, and I can also execute as program on this folder. In Mayfield's vhost it is:

        <Directory /var/www/mayfieldafc/docroot>
            Options +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I am pulling my hair out not being able to work on my sites with my work Ubuntu install. I know of nothing else that could be affecting this. Any ideas?
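
    One thing worth checking (an assumption, not something confirmed in the question) is whether FollowSymLinks is disabled for a parent directory such as /var/www itself, since the error names /var/www rather than the docroot. A sketch of a directive that would re-enable it there:

        <Directory /var/www>
            Options +FollowSymLinks
        </Directory>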

  • Ars Technica .ars URL suffix -- Vanity or SEO Benefit?

    - by yc01
    The technology website Ars Technica has adjusted its URL rewrite rules so that URLs end with .ars. Traditionally, sites have taken advantage of URL rewriting to completely eliminate file suffixes like .html, .php, and .aspx, under the theory that this makes for better SEO (since the content of the URL is more relevant to the content of the page). Ars Technica's URLs, though, look like this: http://arstechnica.com/science/news/2011/03/flow-from-the-poles-drive-sunspot-levels.ars So, is Ars Technica adding the .ars suffix purely as a vanity play? Or is it an SEO trick to improve the site's rankings by cleverly inserting the site name into every URL slug? And if this is indeed an effective SEO trick, should other sites follow suit?
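
    For context, the traditional suffix-stripping the question refers to is typically done with a rewrite rule along these lines (a generic .htaccess sketch, not Ars Technica's actual configuration):

        # Serve /about from about.php without exposing the .php suffix
        RewriteEngine On
        RewriteRule ^([a-z0-9-]+)$ $1.php [L]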

  • Best Web Site Copying Software

    - by GregH
    I just wanted to get some opinions on the best "web site copying" software out there (free or commercial is fine). I have a site that I've recently become responsible for managing, and the previous consultant has not provided operating system access. As such, the plan is to re-host the web site. I realize there are a lot of different issues to consider in doing this; however, I don't have much choice in the matter now. The plan is to use web site copying software (à la HTTrack) to "rip" the web site and then modify what is downloaded back into a maintainable site. This, of course, involves the HTML, CSS, JavaScript, etc. on the front end. I'd like to recover as much of the site as possible to make re-creating it as easy as possible.
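
    Since HTTrack is mentioned, this is roughly what a mirroring run looks like from its command line (a sketch; the URL and output directory are placeholders):

        # Mirror the site into ./mirror, staying within the example.com domain
        httrack "http://www.example.com/" -O ./mirror "+*.example.com/*" -v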

  • Using an SMTP Service for email

    - by Josh S.
    This may be a horribly obvious question, but I'm learning and just need someone to confirm it for me. I'm putting together a private social network that needs to email its members (through the social network software, Elgg) regularly. I'm hosting it on a shared HostGator plan (because it won't receive much traffic), and it will send 10-1,000 emails a few times a week. HostGator restricts you to 500 emails per hour. I'm also worried about deliverability. I've been searching up and down for how to throttle the emails so they all send reliably, but then I came across the idea of an outside SMTP relay service. Would using an SMTP service resolve this issue? If so, any opinions on quality SMTP services?
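
    For what it's worth, an external relay usually just means pointing the application's mailer at the relay's SMTP host instead of the local server. A minimal PHP sketch using PHPMailer (host, port, credentials, and addresses are all placeholders; Elgg itself would need a plugin or configuration change to do the equivalent):

        <?php
        require 'vendor/autoload.php'; // PHPMailer installed via Composer

        use PHPMailer\PHPMailer\PHPMailer;

        $mail = new PHPMailer(true);          // throw exceptions on failure
        $mail->isSMTP();                      // send through an SMTP relay
        $mail->Host       = 'smtp.relay-provider.example';
        $mail->Port       = 587;
        $mail->SMTPAuth   = true;
        $mail->Username   = 'relay-username';
        $mail->Password   = 'relay-password';
        $mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;

        $mail->setFrom('news@example.org', 'Network News');
        $mail->addAddress('member@example.com');
        $mail->Subject = 'Weekly update';
        $mail->Body    = 'Hello from the network.';
        $mail->send();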

  • Which token from a long User-Agent should I use in robots.txt?

    - by Gaia
    The definition of User-Agent states that several tokens can be included, as deemed necessary by the client. I want to block certain bots via robots.txt, and I am confused as to which part of the User-Agent string to use, especially for the more obscure bots. For example:

        Mozilla/5.0 (compatible; uMBot-LN/1.0; mailto: [email protected])
        JS-Kit URL Resolver, http://js-kit.com/
        Mozilla/5.0 (compatible; SEOkicks-Robot +http://www.seokicks.de/robot.html

    Do I use the second token? Can tokens contain spaces, or did the SEOkicks folks forget a semicolon after SEOkicks-Robot? I don't actually intend to make my question specific to a couple of bots; I want to know the guideline: which part of the UA do I place in robots.txt for these exotic bots with UAs as long as a haiku?

        User-agent: uMBot-LN/1.0
        Disallow: /

    PS: Thank you, but I do not need to hear that undesirable bots are better blocked with mod_security. I already have commercial mod_sec rules in place.

  • Why are pages intermittently disappearing from SERPs?

    - by Beebee
    I have a very basic informational site with about 50+ pages. Specifically, there is a page about each site. I have been tracking pages daily for months, and they had been steadily rising in the SERPs. Suddenly, about 3-4 weeks ago, about 20 pages went completely missing from the SERPs, though they were still indexed (I could google a specific piece of text from a page in quotes and it would appear). Since then, pages have been continually added to and removed from the SERPs, cutting my traffic by about 50%. All I have been adding is information about the site's topic. I haven't stolen content or done anything else that seems like it would cause this.

  • Monitoring Domain Availability

    - by JP19
    How can I write a tool to monitor domain name availability? In particular, I am interested in monitoring the availability of a domain that is in PENDINGDELETE (or REDEMPTIONPERIOD, REGISTRY-DELETE-NOTIFY, PENDINGRESTORE, or similar) status after its expiration date. Any suggestions or further information about PENDINGDELETE and similar statuses are also welcome (for example, how long a domain can remain in this status; I usually don't see a fixed pattern, or even a consistent correlation between the expiration date and this status).
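
    As a starting point, a status check can be scripted directly against a whois server on port 43. A minimal PHP sketch (the whois server shown is the one for .com; a real tool would add polling, per-TLD servers, and rate limiting):

        <?php
        // Query a whois server on port 43 and return the raw response
        function whois_query(string $server, string $domain): string {
            $fp = fsockopen($server, 43, $errno, $errstr, 10);
            if (!$fp) {
                throw new RuntimeException("whois connect failed: $errstr");
            }
            fwrite($fp, $domain . "\r\n");
            $response = stream_get_contents($fp);
            fclose($fp);
            return $response;
        }

        $raw = whois_query('whois.verisign-grs.com', 'example.com');
        if (stripos($raw, 'pendingDelete') !== false) {
            echo "Domain is in pendingDelete status\n";
        }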

  • Magento Bulk Product Import + Modules Nightmare

    - by mike
    I have 5,000 products in a CSV file. The file has been re-saved as a UTF-8 file in Google Documents and exported to CSV from Excel. It loads perfectly, with all fields, in the demo of Magento Store Manager (except we don't want to buy it). When trying to upload it in regular Magento, I keep getting error messages about column duplicates; yes, we have hundreds of duplicates, as the product titles correspond to different sizes, etc., and there is no way around it. Are there any solutions for this, or any open source software similar to Store Manager that can do the trick? I'm ready to give up and go to a paid solution such as Big Commerce. Also, I installed a bunch of free bulk-product-import modules/keys from magentocommerce, but I can't find them anywhere in the main admin panel to use; there is no menu item for them anywhere.

  • Firefox stalls on rendering when Chrome doesn't

    - by amccormack
    I have a webpage that loads quickly 100% of the time in Chrome, but only 10% or so of the time in Firefox. Looking at the Fiddler capture, Firefox loads only 2 of the 100ish files being pulled before it hangs. The error does not seem to be on the server or network side, however, because Chrome never encounters a problem. How do I find the root of this stall? I suspect Firefox's JavaScript execution is what is causing the hang; are there any particular methods to narrow down the search for the bad code?

  • How to develop an Online Shopping Portal Application using PHP?

    - by Sarang
    I do not know PHP, and I have to develop a shopping portal with the following definition. Scenario: XYZ.com wants to create an online shopping portal for managing its registered customers and their shopping. Customers need to register before they can shop through the portal; however, everyone, registered or not, can view the various products and prices listed in the portal. Registered customers, after logging in, can place an order for one or more of the listed products. Once an order is placed, the customer gets a reference order number, and the order status should be “order in process”. Customers can track their order using the given reference number. The management of XYZ.com should be able to change the order status for a particular reference number to “shipped” once the products have been shipped to the address the customer entered when placing the order. The required functionality is:

        1. Create the interface for the XYZ.com shopping portal using HTML/XHTML and CSS.
        2. Implement the client-side validations using JavaScript.
        3. Create the tables using MySQL.
        4. Implement the functionality using the server-side scripting language PHP.
        5. Integrate all the above tasks and make the XYZ.com shopping portal functional.

    How do I develop this application, following the proper steps of development?
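
    As one concrete piece of the data model, the order-tracking requirement maps naturally onto a MySQL table along these lines (a sketch; the table and column names are made up for illustration):

        -- Orders carry a reference number and a status that management
        -- later updates from 'order in process' to 'shipped'
        CREATE TABLE orders (
            reference_no INT AUTO_INCREMENT PRIMARY KEY,
            customer_id  INT NOT NULL,
            status       ENUM('order in process', 'shipped') NOT NULL
                         DEFAULT 'order in process',
            placed_at    DATETIME NOT NULL
        );

        -- Management marks an order as shipped
        UPDATE orders SET status = 'shipped' WHERE reference_no = 1001;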

  • Stop Apache serving filetypes

    - by ProfSmiles
    Preferably using .htaccess files, though .conf files are an option: is there any way to stop Apache serving certain filetypes? For example, .db files shouldn't be served for obvious reasons (privacy and whatnot), so could I make them show up as a 404 but still have them available to my CGI scripts? Putting these sensitive files in a directory other than /public_HTML/ is also an option, though I like having them in the same directory as the scripts for ease of use. Cheers
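
    The usual .htaccess approach is a FilesMatch block that refuses to serve the files over HTTP while leaving them readable by scripts on the filesystem (a sketch using the Apache 2.2 directives that appear elsewhere on this page; note it returns a 403 rather than a 404 unless paired with a custom ErrorDocument):

        <FilesMatch "\.db$">
            Order allow,deny
            Deny from all
        </FilesMatch>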

  • How to remove duplicate content, which is still indexed, but not linked to anymore?

    - by David
    A bug in the tool we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on.

        Right URL: abc.com/price/sharp-ah-l13-12000-btu.html
        Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake)

    After that, we changed all URLs back to the "right" URLs, and a few days later set up a 301 redirect for all the "wrong" URLs. Still, a massive number of pages is in the index twice. As we no longer link internally to the "wrong" URLs, I am not sure Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "wrong" URLs now redirect to the "right" URLs? Best, David
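
    For reference, the per-URL form of such a 301 redirect in Apache looks like this (a sketch built from the example URLs above; the question does not say how their redirects are actually implemented):

        # Permanently redirect the mistaken URL to the correct one
        Redirect 301 /item/sharp-l-series-ahl13-12000-btu.html /price/sharp-ah-l13-12000-btu.html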

  • Cloud computing - database loading question

    - by workwise
    The following is my situation; I want to know whether what I want is possible in cloud computing and whether it is the best approach for me:

        1. My main site has a database with tables with millions of rows, and entries are added almost every second.
        2. I will set up a MySQL mirror, so there will always be a backup database in sync with the main one.
        3. There are a few tens of thousands of images, and growing, totaling a few tens of gigabytes. I will keep the image data in sync on the backup server as well.
        4. There can be short periods where traffic goes to 100X the average.
        5. I will be using memcache heavily; most database content and even frequently used disk files/images will be in RAM.

    I want the main site to run on a dedicated server, with the backup server as, say, an Amazon EC2 instance. Note that since it is a live backup, I need to run a small instance continuously. When I anticipate high traffic, I want to be able to start a large instance in the cloud and transfer the traffic there. The main point is that I do not want to spend time "loading" the database on the large instance, as that can typically take minutes or even hours (from experience). So is it possible to just scale the memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks, JP

  • Why do non-working websites look like search indexes?

    - by Asaf
    An example of this can be seen on any .tk domain that doesn't exist: just visit www.givemeacookie.tk or something similar and you land on a fake search engine/index page. These things are all over; I used to ignore them because they were obviously just fake templates, but they're everywhere. I was wondering, where did this originate (here it's searchmagnified, but on other sites it's something equally generic)? Is there a proper name for this type of website?

  • Making fonts render similarly across browsers

    - by Zach L.
    I am building a website for a client, and we had hoped to use plain text, not images, in the navigation bar. The font we are using is Century Gothic (I believe this font is available on the majority of PCs and Macs). The problem is that different browsers render the font significantly differently. In Chrome we got it looking the way we want, but in Firefox the text is smaller and bolder. Aside from writing browser-specific JavaScript to alter the font properties, are there any other options for standardizing the way fonts are rendered cross-browser? Perhaps some library or API? Maybe it's a matter of being more specific in declaring font properties? Honestly, I am stuck and need help.
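
    On that last point, being explicit does remove some of the browsers' guesswork: declaring size, weight, and line-height rather than relying on defaults, plus a full fallback stack, is a common first step (a sketch; the fallback font names are conventional choices, not from the question):

        /* Explicit properties so browsers have less to infer */
        nav a {
            font-family: "Century Gothic", CenturyGothic, AppleGothic, sans-serif;
            font-size: 14px;
            font-weight: normal;
            line-height: 1.4;
        }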

  • cPanel email doesn't seem to work - error 550?

    - by Megh
    I am fairly new to the web hosting game, so bear with me :) I recently set up a VPS with cPanel and WHM. Everything has gone well so far: I've created a user domain and transferred my website there, and I've managed a couple of databases with phpMyAdmin. Everything was going great until I started messing around with email. I made an email account [email protected] through cPanel, but when I try to email this address I get the following error:

        Technical details of permanent failure: Google tried to deliver your message, but it
        was rejected by the recipient domain. We recommend contacting the other email provider
        for further information about the cause of this error. The error that the other server
        returned was: 550 550 Unknown user (state 13)

    Quite unsure of what to do next, in all honesty.

  • Scripts Causing Flash Intro Animation To Stop [migrated]

    - by ubique
    When my Flash website loads, it freezes halfway through the initial animation for 2-3 seconds and then continues. This obviously doesn't look great, and I can't figure out what is causing it. I think one of the scripts in index.html is causing the issue and have tried all sorts of ways to correct it. What have I done wrong?

        <!DOCTYPE html>
        <html lang="en">
        <head>
          <title>company name</title>
          ...
          <link href="style.css" rel="stylesheet" type="text/css" />
          <script type="text/javascript" src="js/flashobject.js"></script>
          <!--[if lt IE 7]>
            <link href="ie6.css" rel="stylesheet" type="text/css" />
          <![endif]-->
        </head>
        <body>
          <header>
            <hgroup>
              <h1>company</h1>
              <h2>company</h2>
            </hgroup>
          </header>
          <div id="container">
            <div id="head">
              <div class="aligncenter"><a href="http://www.adobe.com/go/EN_US-H-GET-FLASH">
                <img src="http://www.adobe.com/images/shared/download_buttons/get_adobe_flash_player.png" alt="" /></a>
              </div>
            </div>
          </div>
          <div class="g-plus" data-href="https://plus.google.com/100925740920754223119?rel=publisher"
               data-width="170" data-height="69" data-theme="light">
        </body>

        <!-- Flash -->
        <script type="text/javascript">
          var fo = new FlashObject("main_v10.swf", "head", "100%", "100%", "8", "");
          fo.addParam("quality", "high");
          fo.addParam("allowFullScreen", "true");
          fo.write("head");
        </script>

        <!-- Hello Bar -->
        <script type="text/javascript" src="//www.hellobar.com/hellobar.js"></script>
        <script type="text/javascript">
          new HelloBar(39040,52484);
        </script>

        <!-- GPlus -->
        <script type="text/javascript">
          window.___gcfg = {lang: 'en'};
          (function() {var po = document.createElement("script");
            po.type = "text/javascript"; po.async = true;
            po.src = "https://apis.google.com/js/plusone.js";
            var s = document.getElementsByTagName("script")[0];
            s.parentNode.insertBefore(po, s); })();
        </script>

        <!-- Google -->
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-xxxxxxxx-1']);
          _gaq.push(['_setSiteSpeedSampleRate', 10]);
          _gaq.push(['_trackPageview']);
          (function init() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga,s);
          })();
          window.onload = init;
        </script>
        </html>

  • How to Export Email "Sent" Folder?

    - by user249493
    A client had her web site and email hosted at "company A". She was switching to "company B" but didn't want to lose her email. I set up a Gmail account, POPed into her webmail account, and pulled the entire inbox into Gmail (for later transfer to her new host). But I forgot about the "sent" folder. Although the hosting plan is still up and running, she changed her domain record to point to the new host, so I can't access the old webmail account via POP or IMAP: the email address needed for authentication now resolves to the new host. Is there any way I can get the contents of the sent folder without having to forward one message at a time (there are hundreds)?
