Search Results

Search found 14789 results on 592 pages for 'pro backup'.

  • Using YouTube as a CDN

    - by Syed
    Why isn't YouTube used as a CDN for video and audio files? Through YouTube's API and developer tools, it would be possible to post all media files to YouTube from a CMS and then call them when needed. This seems to be within YouTube's TOS; it's a cost-effective way to store, retrieve, and distribute media files, and it could also make for easy monetization. I ask because I'm working on a new project for a public radio station, and I can't figure out the real downside to this sort of implementation.
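
    A rough sketch of the "CMS pushes media to YouTube" workflow, using the googleapis Node client. This assumes OAuth2 credentials are already configured; the file path, metadata, and privacy status are placeholders, not anything YouTube mandates:

      // Upload a media file from the CMS to YouTube via the Data API v3.
      const fs = require('fs');
      const { google } = require('googleapis');

      async function uploadToYouTube(auth, filePath) {
        const youtube = google.youtube({ version: 'v3', auth });
        const res = await youtube.videos.insert({
          part: ['snippet', 'status'],
          requestBody: {
            snippet: { title: 'Station archive item', description: 'Uploaded from the CMS' },
            status: { privacyStatus: 'unlisted' } // keep archive items off public listings
          },
          media: { body: fs.createReadStream(filePath) }
        });
        return res.data.id; // store the video ID in the CMS for later embedding
      }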

  • Where does the URL parameter "?chocaid=397" come from?

    - by unor
    In Google Webmaster Tools, I noticed that my front page was indexed twice: example.com/ and example.com/?chocaid=397. I know that I could fix this with a rel="canonical" link, but I wonder: where does this parameter come from? There are various sites that have pages indexed with this very parameter/value: https://duckduckgo.com/?q=chocaid%3D397. I looked for similarities between these sites but couldn't find a conclusive one: it's often the front page, but not in every case. Some are NSFW, but not all. When one domain's URL has this parameter, other subdomains of the same domain often have it too. Examples: a Wikipedia entry, Microsoft CodePlex.
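
    For reference, the canonical fix mentioned above is a single line in the page's head; the URL here is a placeholder for the real front page:

      <link rel="canonical" href="http://example.com/">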

  • how to start sendmail - WP email turned off

    - by wejrowski
    I have a WP site on a Linux box. Our email was working fine in WordPress but recently it stopped, I think because of a restart. All I could think of was to restart sendmail. I tried the usual init script (/etc/init.d/sendmail restart), but it doesn't exist. I found the sendmail binaries under /usr/sbin and /usr/lib, but every time I try running one it just hangs and I have to Ctrl-C out. This is everything I tried; any ideas?

      [root@li209-134 ~]# /etc/init.d/sendmail restart
      -bash: /etc/init.d/sendmail: No such file or directory
      [root@li /]# find . -name sendmail -print
      ./usr/sbin/sendmail
      ./usr/lib/sendmail
      [root@li /]# ./usr/sbin/sendmail restart
      ^C
      [root@li /]# sudo /usr/sbin/sendmail restart
      ^C
      [root@li /]# sudo service sendmail start
      sendmail: unrecognized service
      [root@li /]# /usr/sbin/sendmail start
      ^C
      [root@li /]# /usr/sbin/sendmail
      ^C
      [root@li /]# /usr/lib/sendmail start
      ^C
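
    A possible path forward, sketched under the assumption that this is a RHEL/CentOS-style box (the "unrecognized service" error suggests the sendmail package's init script was never installed). Note that /usr/sbin/sendmail with no arguments waits for a message on stdin, which is why it appears to hang until Ctrl-C:

      # Check whether the sendmail package (which ships the init script) is installed:
      rpm -q sendmail

      # If it is missing, install it, then start and enable the daemon:
      yum install sendmail
      /etc/init.d/sendmail start
      chkconfig sendmail on

      # Verify something is listening on the local SMTP port:
      netstat -lnp | grep :25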

  • Where can I download list of all .com domains registered in the world

    - by John
    I just need registered .com domain names. I know this list is available at:

      http://www.verisigninc.com/en_US/products-and-services/domain-name-services/grow-your-domain-name-business/tld-zone-access/index.xhtml (looks like it could take 4 weeks for approval)
      http://www.premiumdrops.com/zones.html

    Also, I can extract domain names using the domain search API at domaintools.com. Is there any other source where I can find this list?

  • Better ways to get valuable data indexed that is currently ignored

    - by Sam
    <a title="">…</a> Hi folks. It seems that my title attribute, which holds extremely valuable text and describes the contents of my simple-design page, is currently completely ignored by search engines and not indexed at all! Those descriptions should be indexed, as they describe valuable portions of an otherwise sparse page with a clean glossary (neat and organised to the viewer's eye). Putting all that descriptive data into visible space would ruin the less-is-more fundamentals of the design. So, which alternatives to the title attribute do I have for placing content that is relevant to both users and search engines?

      A. <a name="">…</a>
      B. <p name="">…</p>
      C. <a alt="">…</a>
      D. <p alt="">…</p>

    From the above list arose my question: which of the above is an advisable alternative for getting the actual valuable content indexed? Should it be in an a tag or a p tag? Or are there even better tags for this which still keep the layout clean? Your suggestions are much appreciated!

  • AdSense CPM and content topics

    - by Silver Moon
    I run a few blogs on topics like programming, Linux tips and network security, and I noticed the following. Until last year I had only one blog, with posts on PHP, Linux tips, network security etc.; its AdSense RPM was around 1.00. Then I split the content into 3 separate blogs: one focused on web development/PHP/MySQL, a second focused on Linux/Windows how-tos and tips, and a third focused on network security and related networking topics. The AdSense RPM rose significantly for two of the blogs: 1.38 (PHP blog), 0.87 (tech tips blog) and 1.90 (network security blog). In April 2013 the network security site had the highest traffic, and that site's AdSense income alone was twice that of all three sites combined previously. My question is simple: does focusing on one topic lead to higher CPC/CPM?

  • 3 Scenarios for most relevant keywords in website. Which one is best?

    - by Sam
    A webpage about tomato soup has one of the three following filenames:

      Scenario 1: website.org/en/tomato-soup
      Scenario 2: website.org/en/tomato-soup-healthy-soups-recipes
      Scenario 3: website.org/en/tomato-why-sandra-is-so-wild-about-her-healthy-tomato-soup-recipes

    Q1. Which one of the above would you go for?
    Q2. Which one of these would be ranked as most relevant by Google?
    Q3. Would any of these be penalized for keyword stuffing?

  • Extend depth of .htaccess to all subfolders and their children

    - by JoXll
    I need my .htaccess rules to apply to all subfolders, at full depth. E.g. I have a .htaccess file in my public_html folder (\public_html\.htaccess). How do I make it work for the folder small as well (\public_html\home\images\red\thumbs\small\)? It only seems to be enforced down to the home directory, not further.

      ErrorDocument 403 http://google.com
      Order Deny,Allow
      Deny from all
      Allow from 11.22.33.44
      Options +FollowSymlinks
      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
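
    .htaccess directives normally do cascade into every subdirectory automatically, so a likely culprit is either a second .htaccess deeper in the tree (whose rewrite rules replace the parent's) or a restrictive AllowOverride in the main server config. A sketch of things worth checking; paths are examples:

      # Look for deeper .htaccess files that may be overriding the top-level one:
      find /home/user/public_html -name .htaccess

      # If a subfolder must keep its own .htaccess, it can pull in the parent's
      # mod_rewrite rules explicitly:
      RewriteEngine On
      RewriteOptions Inherit

      # And in the main server config, .htaccess is only honoured where
      # AllowOverride permits it:
      <Directory /home/user/public_html>
          AllowOverride All
      </Directory>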

  • One domain and multiple website in folders

    - by User1212
    I am going to create a network with one domain, e.g. example.com, then manage my websites in folders. For example:

      www.example.com/market
      www.example.com/freebies
      www.example.com/personalblog
      www.example.com/shop

    Consider that all four websites have different designs and code. From an SEO perspective, is this recommended, or should I use subdomains or buy four domains, one for each website?

  • How to create sitemap for my shopping site?

    - by John Sanjay
    I have a shopping site for home goods, and I need to create and submit its sitemap in Google Webmaster Tools. I know there are several online tools to generate an XML sitemap, but someone told me that shopping sites' sitemaps are different from other sites', meaning we have to submit sitemaps in two formats: one static-page sitemap and another dynamic product-page sitemap. Is it true? If so, how do I create sitemaps in these two formats?
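
    The sitemap protocol itself has no shopping-specific format, but keeping static pages and dynamic product pages in separate files and tying them together with a sitemap index is a common pattern. A minimal sketch, with placeholder file names:

      <?xml version="1.0" encoding="UTF-8"?>
      <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <sitemap>
          <loc>http://example.com/sitemap-static.xml</loc>
        </sitemap>
        <sitemap>
          <loc>http://example.com/sitemap-products.xml</loc>
        </sitemap>
      </sitemapindex>

    The index is what gets submitted in Webmaster Tools; the product sitemap can then be regenerated by a script as inventory changes.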

  • redirecting in node.js behind mod_rewrite proxy

    - by chmanie
    I have a node.js application running behind an Apache mod_rewrite proxy configured in a .htaccess file like this:

      RewriteCond %{HTTP_HOST} =mydomain.com [OR]
      RewriteCond %{HTTP_HOST} =www.mydomain.com
      RewriteRule (.*) http://localhost:3000/$1 [QSA,P]

    When I now do a redirect (e.g. Express' res.redirect()) inside my node.js application (which runs on port 3000), the user is always redirected to http://localhost:3000/ (which is in fact exactly what is defined above, but not the desired behaviour). Is there any way around this?
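
    One workaround sketch: mod_proxy forwards the original host in the X-Forwarded-Host header, so the Express app can rebuild an absolute URL instead of letting the redirect default to the backend's own localhost:3000. The route and paths here are placeholders. (If the virtual-host config is editable, ProxyPreserveHost On is the usual server-side fix, though that is typically set at the vhost level rather than in .htaccess.)

      // Redirect against the host the user actually requested, not the backend's.
      app.get('/old-path', function (req, res) {
        var host = req.headers['x-forwarded-host'] || req.headers.host;
        res.redirect('http://' + host + '/new-path');
      });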

  • Create speed baseline for local web file

    - by Michael Jasper
    Is there any tool or method that will load a localhost page a number of times, and return the averaged data for load times, onload events, DOM-ready events, etc.? I'd like to work on page speed optimization, but need a baseline before I begin. I have used both Google Analytics and Webmaster Tools, but I'd like an automated solution that runs locally. My ideal solution would be a program or script that would take the path/file and a number of iterations, then take several minutes to load the page n times without cache and crunch the numbers to create a baseline.
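
    As a rough sketch of that kind of harness: curl's -w timing variables can be averaged over n runs. This only measures network/server time, not onload or DOM-ready events (those need a headless browser), but it gives a repeatable baseline; the URL and iteration count are placeholders:

      #!/bin/sh
      # Load a local page N times and average curl's total transfer time.
      URL="http://localhost/test.html"
      N=20
      for i in $(seq 1 $N); do
        curl -o /dev/null -s -H 'Cache-Control: no-cache' \
             -w '%{time_total}\n' "$URL"
      done | awk '{sum += $1} END {printf "avg over %d runs: %.3fs\n", NR, sum/NR}'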

  • How can I allow robots access to my sitemap, but prevent casual users from accessing it?

    - by morpheous
    I am storing my sitemaps in my web folder. I want web crawlers (Googlebot etc.) to be able to access the files, but I don't necessarily want all and sundry to have access to them. For example, this site (superuser.com) has a site index, as specified by its robots.txt file (http://superuser.com/robots.txt). However, when you type http://superuser.com/sitemap.xml, you are directed to a 404 page. How can I implement the same thing on my website? I am running a LAMP website, and I am also using a sitemap index file (so I have multiple sitemaps for the site). I would like to use the same mechanism to make them unavailable via a browser, as described above.
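
    One common LAMP approach is to gate the sitemap files by User-Agent in .htaccess. A sketch along those lines (the User-Agent header is trivially spoofed, so this hides the files from casual visitors rather than truly securing them; the bot list is illustrative):

      RewriteEngine On
      # Serve a 404 for sitemap files unless the client claims to be a known crawler
      RewriteCond %{HTTP_USER_AGENT} !(Googlebot|bingbot|Slurp) [NC]
      RewriteRule ^sitemap.*\.xml$ - [R=404,L]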

  • Analyze Drupal and Wordpress sites CPU load in shared server

    - by Tedi
    Our hosting company is complaining that both our Drupal and WordPress websites running on a shared server are consuming too many CPU resources. The traffic for each site is no more than 100 users per day and, at first glance, we don't have very many plugins/add-ons. Is there any tool or resource to analyse what is causing that high CPU load? Thanks.

    Update: We decided to suspend our accounts while the problem was being debugged, but our host (Site5) still said they saw unacceptable activity on our sites, so we had to move to a dedicated server... We asked them several times to provide us with more information, and they always came back saying that we had to purchase a higher-tier account. We finally decided to move to another hosting service.

  • How to spread XML Sitemaps over several webservers behind AWS loadbalancer?

    - by Jurik
    We have a web portal with almost a million products and even more other URLs. I wrote a script that checks the database: if a new URL is needed, or an old one was updated, the script updates/creates the XML sitemaps. But we have several servers behind the load balancer at our rented AWS space, and for each URL the script checks the database for updates so that it can update the appropriate XML file. My question is how to spread those XML sitemaps over all webservers behind this AWS load balancer. Our approaches/ideas:

      1. We could generate them on one server with a cron job and copy them to the other servers, but this could be difficult because the number of servers rises and falls automatically.
      2. We could put them on S3, but the bucket is not available through our domain, so I guess Google will have a problem with it.
      3. I could let my script run on every webserver, changed so that it regenerates every XML file that does not exist. But then I would have conflicts with updated URLs in my database, where I save the timestamp of each URL's last change.

    Is there another, better solution that I do not know of? Are there any special services by Amazon for such cases?
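
    One hedged sketch building on idea 2: the sitemap protocol does allow a sitemap to live on another host (such as an S3 bucket) as long as robots.txt on the original domain points to it, so a single designated node can generate the files on a cron job and push them; the bucket name and paths are placeholders:

      # On one designated node, after the generator script runs:
      aws s3 sync /var/www/sitemaps/ s3://example-sitemap-bucket/ --acl public-read

      # robots.txt served from the portal's own domain then references the bucket:
      # Sitemap: https://example-sitemap-bucket.s3.amazonaws.com/sitemap-index.xml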

  • How long should my HTML page title really be?

    - by RandomBen
    How long should the text within my <title></title> tags really be? I know Google cuts it off at some point, but when? When I used IIS7's SEO Toolkit 1.0, I got an error stating my title should be under 65 characters. I have a book by Bruce Clay that states I should use 62-70 characters and roughly 9 +/- 3 words. I have also used SenSEO's Firefox add-on, and it states I should use a max of 65 characters or roughly 15 words. What is the max really? I have two sources saying 65 and one saying up to 70, but Bruce Clay is generally held in high regard.

  • Customising Google Maps breaks highway label blocks

    - by user2248809
    I'm trying to customise a Google map to use shades of a particular colour. It's working nicely, except the blocks that contain major road names/numbers are illegible. I've figured out how to target styles to those elements, but setting the 'color' value sets both text and background to that colour, and no adjusting of saturation, gamma, lightness etc. seems to make the text legible.

      function initialize() {
        var latlng = new google.maps.LatLng(50.766472, 0.284732);
        var styles = [
          {
            stylers: [
              { "gamma": 0.75 },
              { "hue": "#607C75" },
              { "saturation": -75 },
              { "lightness": 0 }
            ]
          },
          {
            featureType: "water",
            stylers: [ { color: "#607C75" } ]
          }
        ];
        var myOptions = {
          zoom: 15,
          center: latlng,
          mapTypeId: google.maps.MapTypeId.ROADMAP
        };
        var marker = new google.maps.Marker({
          position: latlng,
          title: "Living, dining, bedrooms by David Salmon"
        });
        var map = new google.maps.Map(document.getElementById("map"), myOptions);
        map.setOptions({ styles: styles });
        marker.setMap(map);
      }
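
    The styled-maps API does allow label text and its outline to be coloured independently of the background via elementType. A hedged sketch of two stylers that could be appended to the styles array above (colour values are placeholders to adjust against the chosen hue):

      // Target highway-label text and its stroke separately, so labels
      // stay legible against the tinted shield/background.
      {
        featureType: "road.highway",
        elementType: "labels.text.fill",
        stylers: [ { color: "#ffffff" } ]
      },
      {
        featureType: "road.highway",
        elementType: "labels.text.stroke",
        stylers: [ { color: "#2b3a36" } ]
      }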

  • Does a "nofollow" attribute on a link prevent URL discovery by search engines?

    - by Stephen Ostermiller
    I know that nofollow prevents link juice from being passed across a link. But if search engine robots discover a link with a nofollow on it, will they add that link to their crawl queue? In other words, if I create a link to a brand new page and put a rel=nofollow attribute on that link, will it prevent search engine bots (particularly Googlebot) from crawling the page? (Assume that this link remains the only link to that page.) I've read conflicting reports about this over the years, and I'm looking for authoritative references about the current state of affairs. Official statements from Google or published results of independent testing would be ideal.

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. To prevent users from entering incorrect data, we don't want free-text input; instead, users should choose from predefined values as much as possible. We believe there are two ways of providing those values: use an API from an external provider, or create our own local database.

    APIs. Some resources:
      https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
      http://developer.yahoo.com/geo/geoplanet/
    Pros:
      - accuracy and completeness of data
      - no maintenance related to updating the data, as this is taken care of by the API provider
      - easier/faster to get started (no need to create a local database, just implement the API)
    Cons:
      - degraded performance when the external API has availability issues
      - outages due to changes to the external API (until your code is updated to reflect those changes)
      - lock-in with the external provider

    Local database. Some resources:
      http://developer.yahoo.com/geo/geoplanet/data/
      http://www.maxmind.com/app/geolitecity
      http://download.geonames.org/export/dump/
    Pros:
      - no external dependency: improved stability and performance
    Cons:
      - more work to get started (you need to create the database and the code to interact with it)
      - risk of inaccurate/incomplete data, either initially or over time
      - more maintenance work to keep the database up to date

    Assume the depth of information requested from users is as follows:
      - country: interested in the value; also used to narrow down the list of regions
      - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities
      - city: interested in the value (which can be used to work out the related region should we need regional statistics)
      - address: interested in the value, although OPTIONAL

    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
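
    As one concrete illustration of the API route, GeoNames (listed above) exposes a JSON search service that can back a city picker. A hedged sketch; the username is a placeholder for a registered GeoNames account, and the response fields should be checked against their docs:

      // Query GeoNames for populated places (featureClass=P) in a country,
      // to feed a city drop-down as the user types.
      function fetchCities(countryCode, query, callback) {
        var url = 'http://api.geonames.org/searchJSON' +
                  '?name_startsWith=' + encodeURIComponent(query) +
                  '&country=' + encodeURIComponent(countryCode) +
                  '&featureClass=P&maxRows=10&username=YOUR_GEONAMES_USER';
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
          var data = JSON.parse(xhr.responseText);
          callback(data.geonames); // each entry has .name, .adminName1, etc.
        };
        xhr.open('GET', url);
        xhr.send();
      }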

  • How can I host a website on a dynamically-assigned IP address?

    - by nick
    I recently upgraded my internet connection to the point that it is much faster and more reliable than my current webhost. I would like to move my domain to be hosted at home, but my IP address is dynamic. As far as I know, I only get a new IP when I restart my modem and/or router (which is almost never) or when Cable One (my ISP) pushes out a firmware update (rarely). There are a few ways I can see doing this:

      1. Convince my ISP to give me a static IP.
      2. Assign my router my current IP to force a static IP (which might work?).
      3. Set my DNS record to my current IP address and update it on the rare occasions that it changes.

    Obviously I'm hoping that the first one works, but I don't want to pay a lot of extra money (if that's what it takes) to get a static IP address. Which of these options will work most reliably?
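
    Option 3 is essentially what dynamic-DNS services automate. A hedged sketch of a cron-driven check; the update step is a placeholder for whatever the DNS provider's API (or a tool like ddclient) actually requires:

      #!/bin/sh
      # Publish a DNS update only when the public IP actually changes.
      CURRENT=$(curl -s http://icanhazip.com)
      LAST=$(cat /var/tmp/last_ip 2>/dev/null)
      if [ "$CURRENT" != "$LAST" ]; then
        echo "$CURRENT" > /var/tmp/last_ip
        # Placeholder: call the DNS provider's update API here.
      fi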

  • Is Paypal the best solution for payment gateway for a website?

    - by Pennf0lio
    I have a realty website that needs a payment gateway for property reservations. The reservation fees range from $500-$600, with about 5-6 payments per month. I was wondering if PayPal is the best solution for accepting payment, and what the pros and cons of using PayPal would be. PayPal was my first choice because it's easy to integrate into my existing website and I wouldn't have to worry so much about security. P.S. It's not part of the question, but if you can cite some realty websites that accept payment and would be good inspiration, it would be highly appreciated. Thanks!

  • How do I register a new site properly with the major search engines? [closed]

    - by Olivier Pons
    I know this may have been asked many times, but here's my question: I'm about to launch a website which I'm more than proud of (I'll talk about its capabilities on my blog). I want it to be registered by all the major search engines and fetched often, because it may grow quickly. A lot of people may have already asked this question, but nevertheless I didn't find anything relevant to it. I just want to know where I should register a new website with the major search engines when I release it. Maybe this is a wiki question, but I didn't find anything helpful on the subject. Any advice welcome.

  • Unindex google code svn repository content from google index

    - by matcheek
    I developed a small website and saved the code to a Google Code repository. Everything ran smoothly for a while, until results from the Google Code SVN repository started showing up before the results from the actual website. Is there any way I can stop Google from indexing the Google Code repository content, or at least make it rank lower than the website? I'm not talking about sophisticated SEO techniques, just some simple settings if there are any.

  • beginner - best way to do a 'Confirm' page? [closed]

    - by W_P
    I am a beginning web app developer wondering about the best way to implement a "confirm page" upon form submission. I have heard that it's best for the script a form POSTs to to handle the POST data and then redirect to another page, so the user isn't directly viewing the page that was POSTed to. My question is about the best way to implement a "confirm before data save" page. Do I:

      1. Have my form POST to a script which marshals the data, puts it in a GET, and redirects to the confirm page, which unmarshals and displays the data in another form, where the user can then either confirm (which causes another POST to a script that actually saves the data) or deny (which causes the user to be redirected back to the original form, with their input preserved)?
      2. Have my form POST directly to the confirm page, which is displayed to the user and then, like #1, gives the user the option to confirm or deny?
      3. Have my form GET the confirm page, which then does the expected behaviour?

    I feel like there is a common-sense answer to this question that I am just not getting.
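
    For what it's worth, a common variation on #1 keeps the submitted data in a server-side session instead of a GET string. A hedged Express sketch, assuming session and body-parsing middleware are configured; the route names and the save step are placeholders:

      // POST /submit: stash the form data, then redirect (Post/Redirect/Get).
      app.post('/submit', function (req, res) {
        req.session.pending = req.body;      // held until the user confirms
        res.redirect('/confirm');
      });

      // GET /confirm: render the stashed data for review.
      app.get('/confirm', function (req, res) {
        res.render('confirm', { data: req.session.pending });
      });

      // POST /confirm: only now is the data actually saved.
      app.post('/confirm', function (req, res) {
        saveToDatabase(req.session.pending); // placeholder for the real save
        delete req.session.pending;
        res.redirect('/done');
      });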

  • Do I create new site or add to existing site?

    - by nitbuntu
    Hi. Suppose, as an example, I have a website at www.cool-gifts.com, and I'm getting regular sales: it's a worthwhile site, but no great fireworks. After research, I find there is a great market for second-hand stuff, and I'd like to serve that market. Would it be best to add second-hand stuff as an additional category of gifts on my existing site, or, since second-hand stuff is a market in itself, would I be better off investing the time and energy to bring up a whole new site (www.used-stuff.com)? If I had employees and financial resources, it would probably be a no-brainer: start a new site. But what if you are a small guy with limited resources? So... new site, or add to the existing site?
