Search Results

Search found 11671 results on 467 pages for 'man pages'.


  • SEO: Is promoting your backlinks a good strategy for improving search results for my site's name?

    - by user4394
    I run a website that's been around for about three years in the sports space. I am successfully ranking well for targeted keywords, but searching for the name of my site itself returns very poor results - it shows my site, its FB/Twitter, and then 15 pages of unrelated spam that happen to contain two words that, when combined, form my website's name. After that, my backlinks begin to show up sporadically. As far as I can tell, I simply don't have enough backlinks and the backlinks I do have are ranked worse than the spam. (Site Explorer lists 200 external links to any page on our domain and 20 external links directly to the front page.) To counter this, my strategy is to promote my backlinks so they get a better page rank than the spam. Does that make sense? Am I going in the right direction, or should I just focus on getting more backlinks pointing directly to my site? Thanks in advance and I'd be happy to answer any questions I can (without giving away my site, of course).

    Read the article

  • Is doing AB Tests using site redirection a bad practice?

    - by user40358
    I develop hotel websites here in Brazil. When a new site is done, we run an A/B test against the old version to measure conversion and show the hotel owner how good our site is. Because I cannot put the old site inside the new one as a subresource (newone.com/old), I currently run those A/B tests as follows: 1) I create two Google Analytics accounts, one for each site (old and new); 2) I put the GA tags on the old website's pages (replacing any existing GA ID with the newly created one); 3) I add JavaScript code that redirects the user to the old website (on a different URL and a different domain) with 50% probability. I then compare all the metrics, events and goals between those two GA accounts. How bad is this? How might Google interpret users sometimes being redirected and sometimes not? The experiment usually runs for two weeks. Is there any better way of doing this?

    Read the article
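
    A common alternative to a client-side coin flip is to split traffic on the server and pin each visitor to a bucket with a cookie, so repeat visits are not re-randomized and the redirect happens before any content renders. A minimal PHP sketch of that idea, for illustration only (the cookie name, bucket labels and old-site URL are hypothetical, not taken from the question):

        <?php
        // Put this at the top of the new site's entry script. Visitors are
        // assigned to a bucket once and remembered for the two-week experiment.
        $oldSiteUrl = 'http://old-site.example.com' . $_SERVER['REQUEST_URI'];

        if (isset($_COOKIE['ab_bucket'])) {
            $bucket = $_COOKIE['ab_bucket'];
        } else {
            $bucket = (mt_rand(0, 1) === 0) ? 'old' : 'new';            // 50/50 split
            setcookie('ab_bucket', $bucket, time() + 14 * 86400, '/');  // 14 days
        }

        if ($bucket === 'old') {
            header('Location: ' . $oldSiteUrl, true, 302);              // temporary redirect
            exit;
        }
        // Otherwise fall through and serve the new site as usual.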

  • Can we 301 redirect to a new page, but still publish the old content somewhere else?

    - by KBS
    We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. The information on these pages is time sensitive. Old: example.com/2013-related-information.html New: example.com/2014-related-information.html The obvious solution is to delete the old page and do a 301 redirect to the new page. Now, can we still keep the old page by giving it a new URL? example.com/2013-related-information.html is redirected to example.com/2014-related-information.html, and the old content is recreated at a new address such as example.com/new-2013-related-information.html. What we are trying to do is send the user to the fresh page while still not destroying the record copy, in case someone wants to go and dig up the old information.

    Read the article
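
    For reference, the redirect itself is tiny if the old URL is routed to a PHP script; the file names below simply mirror the hypothetical URLs in the question, and the preserved copy would just live at its new address with no redirect pointing at it:

        <?php
        // Handler for the old URL (example.com/2013-related-information.html):
        // permanently redirect browsers and crawlers to the fresh page.
        header('Location: /2014-related-information.html', true, 301);
        exit;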

  • Internet connection crashes while wifi / lan stay connected

    - by pirad
    I am connected to a router by wifi or LAN, but every few moments (sometimes seconds, sometimes half an hour) my connection to the internet breaks down. So in one moment I can surf with Firefox and have ping times of 20 ms (wifi) to the Google server, and in the next moment I can't open any pages in any browser and the ping first shows no new messages and then "Destination Host Unreachable". This is especially odd as another computer or my smartphone stays connected to the same network at the same time and notices nothing. So the problem seems independent of the wifi or LAN drivers and lies with something else. Where should I look for logs that could give me more information? I can't find anything in syslog. Only when I reconnect to the router (which works for another few moments) is there a lot of information, but everything looks okay then.

    Read the article

  • Wifi stops working in 13.10

    - by Vitor
    OK, my wifi connects fine, but then it just stops downloading and uploading data. Skype stops and tries to reconnect, Firefox Nightly and Chromium stop loading pages and websites, everything stops. But when I look at the network icon it says: connected to my wifi. Then I simply reconnect, or disconnect and connect again. After reconnecting, the wifi starts working again, the Skype icon turns green, and the browsers work again. Previously I had 13.04 and never had this problem. The wifi started doing this when I upgraded to Ubuntu 13.10 and has had the issue since the first day I installed it. Is anyone having the same problem? Does anyone know how to fix it?

    Read the article

  • Where would you start if you were trying to solve this PDF classification problem?

    - by burtonic
    We are crawling and downloading lots of companies' PDFs and trying to pick out the ones that are Annual Reports. Such reports can be downloaded from most companies' investor-relations pages. The PDFs are scanned and the database is populated with, among other things, the title, contents (full text), page count, word count, orientation and first line. Using this data we check for obvious phrases such as "annual report", "financial statement", "quarterly report" and "interim report", then record the frequency of these phrases and others. So far we have around 350,000 PDFs to scan and a training set of 4,000 documents that have been manually classified as either a report or not. We are experimenting with a number of different approaches, including Bayesian classifiers and weighting the different factors available. We are building the classifier in Ruby. My question is: if you were thinking about this problem, where would you start?

    Read the article
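
    Whatever the language (the question's classifier is being built in Ruby), the core of a naive Bayes classifier over phrase counts is small. A rough PHP sketch with invented data shapes, purely to illustrate the technique: train() takes documents as arrays of phrase counts plus a manual label, and score() returns a log-odds value where positive means "looks like an annual report":

        <?php
        // $training is a list of entries shaped like
        //   ['phrases' => ['annual report' => 3, ...], 'is_report' => true|false]
        function train(array $training) {
            $model = ['report' => [], 'other' => [],
                      'docs' => ['report' => 0, 'other' => 0]];
            foreach ($training as $doc) {
                $label = $doc['is_report'] ? 'report' : 'other';
                $model['docs'][$label]++;
                foreach ($doc['phrases'] as $phrase => $count) {
                    $model[$label][$phrase] = ($model[$label][$phrase] ?? 0) + $count;
                }
            }
            return $model;
        }

        // Log-odds that a document is a report, with add-one smoothing.
        function score(array $model, array $phrases) {
            $total   = $model['docs']['report'] + $model['docs']['other'];
            $logOdds = log($model['docs']['report'] / $total)
                     - log($model['docs']['other'] / $total);
            $vocab       = count($model['report'] + $model['other']); // distinct phrases
            $reportTotal = array_sum($model['report']) + $vocab;
            $otherTotal  = array_sum($model['other']) + $vocab;
            foreach ($phrases as $phrase => $count) {
                $pReport = (($model['report'][$phrase] ?? 0) + 1) / $reportTotal;
                $pOther  = (($model['other'][$phrase] ?? 0) + 1) / $otherTotal;
                $logOdds += $count * (log($pReport) - log($pOther));
            }
            return $logOdds;  // > 0 suggests "annual report"
        }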

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0! Ask and vote for questions at: bit.ly mod_pagespeed is an open-source Apache module that automatically optimizes web pages and resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google and talk about the history of mod_pagespeed, its fast growing adoption (130K+ sites!), technical architecture and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, then this is definitely the show you won't want to miss! From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Science & Technology

    Read the article

  • Length of Page Title, URL, Meta Description and total number of links on a page

    - by MJWadmin
    We've been examining a number of different SEO tools recently. Several of these tell us that some of our page titles, URLs and meta descriptions are too long. We've also been told that some of our pages have too many links on them. I guess our first question is: is any of that feedback true? Can URLs etc. actually be too long, and if so how much does this affect ranking? Secondly, can you have too many links on a page, and if so, how many is too many? Thanks in advance...

    Read the article

  • How to CURL and avoid timeout death (Twitter Down) [migrated]

    - by David
    Twitter is down right now, and one of my site's home pages relies on getting data from Twitter ("relies" is the problem - it should be more of an accessory feature, as it just shows the follow count from its feed). Here's the code in question:

        function socials_Twitter_GetFollowerCount($username) {
            $method = function () use ($username) {
                return file_get_contents('https://api.twitter.com/1/users/show.json?screen_name='.$username.'&include_entities=true');
            };
            $json = cache('bmdtwitter', 3600, $method, false);
            $json = json_decode($json, true);
            return intval($json['followers_count']);
        }

    What is a good way to make it so that if Twitter is down (or not responsive for some reasonable amount of time), our site doesn't appear to be down? I think the timeout may be defaulting to 30-60 seconds or more.

    Read the article
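
    One low-risk change, independent of the caching layer, is to give file_get_contents an explicit short timeout via a stream context and to fall back gracefully when the request fails, so a slow or down Twitter API cannot stall page rendering. A sketch of that idea (the 3-second timeout and the null fallback are illustrative choices, not the site's actual values):

        <?php
        // Fetch the follower count with a hard timeout; return null on failure
        // so the caller can hide the widget or show a cached value instead.
        function fetchFollowerCountOrNull($username) {
            $context = stream_context_create([
                'http' => ['timeout' => 3],   // give up after 3 seconds
            ]);
            $url = 'https://api.twitter.com/1/users/show.json?screen_name='
                 . urlencode($username) . '&include_entities=true';

            $body = @file_get_contents($url, false, $context); // suppress warning on failure
            if ($body === false) {
                return null;                   // caller decides what to show instead
            }
            $data = json_decode($body, true);
            return isset($data['followers_count']) ? intval($data['followers_count']) : null;
        }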

  • Issue with wired network

    - by Shiju Nambiar
    I have Ubuntu 12.10 (downgraded from 13.04 when I faced this issue). I am a newbie and don't know much about the technical side, like the command line in the terminal. I have a wired internet connection and it had been working just fine for the last few months since I installed Ubuntu. My OS got upgraded to 13.04 and it was still fine. For the last 3 days, the network shows as connected, the connection stays up for a few seconds, and then websites stop loading. The connection shows as connected, but pages don't load. I have to reconnect again to access a website, and the connection goes off again after about 10 to 15 seconds. So I use my internet by connecting and disconnecting every few seconds.

    Read the article

  • Getting a lot of '/_' errors from webmaster tools

    - by Vermino
    I'm using a WordPress site and I thought I had got all the kinks out of it. For some reason Webmaster Tools is reporting a lot of 404 errors from crawls of my website, for URLs starting with /_ - additional pages that I've never created. I just can't figure out what is creating these for Google's crawlers and then returning a 404. My robots.txt is here. My sitemap (created by the Yoast plugin) is here. I have the Yoast and Jetpack plugins installed. What could be causing these links to appear?

    Read the article

  • How do I require users to sign up before they can access the website?

    - by user1867842
    How do I stop my web pages from being accessible via the back button after users have logged out, and how can I block pages the way Facebook does? Facebook doesn't let you into their site without having an account, and if you put something in the URL and try to go to a page on their site, it gives you a page that says "you have to be logged in first". I don't want someone going to the URL of the "index" page before they have signed up as a member; they need to make an account first, and then they can have access to the "index" page. How do I do this? So far I have a website with a database and 5 pages, two of which are the login and sign-up pages; both are built with PHP and MySQL and work fine. How do I restrict access to the main website so that users first have to sign up with me for an account?

    Read the article
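
    A common PHP pattern is to gate every protected page behind a session check and to send no-cache headers so the back button cannot show a protected page from the browser's cache after logout. A minimal sketch, assuming the existing login page sets $_SESSION['user_id'] on success (the file and variable names here are placeholders):

        <?php
        // auth_gate.php - include this at the very top of every protected page
        // (e.g. index.php). Assumes login.php sets $_SESSION['user_id'].
        session_start();

        // Tell the browser not to cache protected pages, so Back after logout
        // re-requests the page instead of showing a cached copy.
        header('Cache-Control: no-store, no-cache, must-revalidate, max-age=0');
        header('Pragma: no-cache');
        header('Expires: 0');

        if (empty($_SESSION['user_id'])) {
            // Not logged in: send the visitor to the login/sign-up page.
            header('Location: login.php');
            exit;
        }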

  • Is the content on the front page considered a duplicate of the posts?

    - by yibe
    I asked this same question on Stack Overflow, but it was closed as off topic, so I am posting it here. In WordPress blogs, the front page of the blog displays many posts, either in full or as excerpts. When the link to a post is clicked, the content is opened with another template file (single.php). Is the content displayed on the front page considered a duplicate of the post pages? Does it harm SEO in any way?

    Read the article
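
    If the duplication worries you, one common mitigation in WordPress is to show only excerpts on the front page, so the full text exists only at the single-post URL rendered by single.php. A simplified loop sketch for a theme's front-page/index template:

        <?php
        // Simplified front-page loop: print excerpts instead of full post
        // content, so the complete article text lives only on the post page.
        if (have_posts()) :
            while (have_posts()) : the_post(); ?>
                <article>
                    <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
                    <?php the_excerpt(); // short summary instead of the_content() ?>
                </article>
            <?php endwhile;
        endif;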

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0! mod_pagespeed is an open-source Apache module that automatically optimizes web pages and resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google and talk about the history of mod_pagespeed, its fast growing adoption (130K+ sites!), technical architecture and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, then this is definitely the show you won't want to miss! From: GoogleDevelopers Views: 2 0 ratings Time: 01:05:06 More in Science & Technology

    Read the article

  • Microformats, Reviews and Duplicate Content

    - by Nicholas
    Let's say I have a site that sells widgets, and the URL structure is like so: /[type-of-widget]/[sub-type]/[widget-name]/ So, a URL for a widget might be: /screwdrivers/philips-screwdrivers/acme-big-screwdriver/ We show reviews on the widget page, and use the appropriate microformat data so Google knows it's a review, etc. Now, what if I want to show random reviews in the "sub-type" and "type-of-widget" landing pages? Will Google ding me for duplicate content, or is it smart enough to know (based on microformat data/etc.) that this is not duplicate content?

    Read the article

  • Where is EaseUS coming from?

    - by Malcolm Lawrie
    I have downloaded the Universal USB Installer and Ubuntu 12.04 Desktop as described on your site. I installed it to a 16 GB USB stick, including the format option. Now when I try to boot from the stick into Ubuntu I get a couple of lines of script, then a screen with EaseUS Todo Backup with Backup, Recovery, Clone and Tools options, but no sign of Ubuntu starting. Where is the "start Ubuntu" option, please? I can find no reference to EaseUS on your help pages.

    Read the article

  • Find out when a new domain appears in search results

    - by TerryB
    Does anyone know a way to do the following: I want to know whenever a new domain starts appearing in the Google search results for a particular query. For a given Google search query, I'd like to receive an alert whenever a new domain pops up and starts appearing in the search results for that query. Alternatively, it would be great if you could just sort Google search results by the age of the domain, making it easy to find new sites. As far as I can tell, you can only sort by when the page was "last updated". Is something like this possible? EDIT: following John's suggestion of Google Alerts: the problem with Google Alerts is that it sends you any new PAGES appearing in the search results, not just new DOMAINS.

    Read the article

  • Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses

    Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses. Update of 14/01/11: as we anticipated yesterday (see below), WebMatrix, Microsoft's new IDE, has been released. WebMatrix is an all-in-one tool aimed at all developers, but particularly at students or anyone looking for a simple and fast way to build a website. It includes IIS Express (a development web server), ASP.NET Web Pages (a web development technology), and SQL Server Compact (an embedded database). "WebMatrix democratizes the web platform by...

    Read the article

  • Alternative Web model

    - by Above The Gods
    One of the problems web apps have compared to native apps, especially on mobile, is the need to re-download each web page on every request. Ultimately, this leads to slower performance. What if web apps only downloaded new pages when they are actually needed, not simply because they are requested? For example: perhaps the server can store a web page version number in a cookie. Every slight change to the page on the server side changes the version number. Now, instead of the browser requesting a new page each time, why not just check the version number and have the server send the page only if they differ? If the page is the same, the user can just use a cached copy. I'm sure browsers wouldn't necessarily have to change to accommodate this, correct?

    Read the article
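
    What the question describes is very close to what HTTP already offers through conditional requests: the server labels a response with a version token (an ETag), the browser sends it back on the next request, and the server answers 304 Not Modified with no body when nothing has changed, so the cached copy is reused. A minimal PHP sketch (the version token and render_page() are placeholders for whatever the real application uses):

        <?php
        // Serve the page with an ETag; answer 304 (no body) when the browser
        // already holds the current version. $pageVersion is whatever changes
        // whenever the page content changes (a counter, a hash, a timestamp).
        $pageVersion = 'v42';                      // hypothetical version token
        $etag = '"' . $pageVersion . '"';

        header('ETag: ' . $etag);
        header('Cache-Control: no-cache');         // always revalidate, reuse cache on 304

        if (isset($_SERVER['HTTP_IF_NONE_MATCH'])
                && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
            http_response_code(304);               // browser's cached copy is still good
            exit;
        }

        echo render_page();                        // placeholder for the full page output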

  • Dual Screens not working nVidia

    - by user91396
    So I'm very much an Ubuntu noob; in fact, I just installed Ubuntu on my PC. I started it up with both my screens plugged into my nVidia card's DVI and VGA ports, logged in, and changed the skin to classic GNOME, because that's how it was when I last used Ubuntu (8.1), and both screens were working separately. The trouble is that I got a notification saying there were nVidia drivers to be installed, so I installed them and restarted my PC, as it told me to, and when I got back on only one of my screens was working. When I go into Displays (All Settings, Displays) it doesn't register my other screen at all, and it calls my working screen "Laptop". I've tried looking through several pages of Google but I see no answer. I did try to find nvidia-settings to see if that had the answers, but sadly I couldn't locate it. Thanks in advance for any help, but please remember, I am very new to Ubuntu.

    Read the article

  • Will I keep Google traffic on the new site when moving content from the old site? [closed]

    - by user1324762
    Possible Duplicate: new domain, old links are 301'd from old domain to new, how will this affect my rankings? I have a site about bikers. Now I have created a dating site for bikers. I don't need the old site any more; I want to move all the articles to this new dating site. So basically, this is not only moving content to a new domain, but to an entirely new site. What I am planning to do is set up 301 redirects for all 200 articles. For pages that are not articles, I will just put up a message saying the site will be down soon. Do you think I will get all the Google traffic those articles brought to the old site? Is there anything I should be aware of and careful about?

    Read the article

  • wifi works only after connecting through wire

    - by orustam
    I have a fresh install of Ubuntu 12.04. It is my first Ubuntu installation and I'm a bit confused about the network connection. Wifi shows up and connects (at least it shows that the connection is established), but I can't open any pages; I've tried to ping some sites and that fails too. If I connect through a wire it works. What is interesting to me is that after I have used my wired connection, I can use my wifi properly without a wire plugged in. I think it probably has to do with my settings? I tried to find a solution but can't figure it out on my own. My proxy is set to none (applied system-wide). Please help me if you have any clue :)

    Read the article

  • Request for opinions about a vertical menu style and suggestions for the site style [on hold]

    - by AndreaNobili
    I am developing a simple, mainly static website for a company using WordPress (because maybe in the future I will add some dynamic content). The new site has to follow the structure of the old site, which requires a vertical main menu in the left column containing links to all the static pages on the site. This is the old site's structure: http://www.saranistri.com/ Now I have installed a new WordPress test site (this is only a test site): http://onofri.org/example/ As you can see, in the left column I have put two vertical main-menu widgets that implement possible choices for the main menu (the top menu above the header must be eliminated in the final implementation). I would like some opinions on: 1) Which of the two versions is better? Do you have any additional ideas about the CSS style of this vertical menu? 2) What could I do to give this site a more professional look? (I know that I have to insert a logo into the header.) Thanks, Andrea

    Read the article

  • Common light map practices

    - by M. Utku ALTINKAYA
    My scene consists of individual meshes. At the moment each mesh has its associated light map texture, and I was able to implement the light mapping using these many small textures. 1) Of course, I want to create an atlas, but how do you split atlases into pages? I mean, do you group the light maps of objects that are close to each other, and load light maps on the fly if the scene is expected to be big? 2) The 3D authoring software provides automatic UV coordinates for each mesh in the scene, but there are empty areas in texel space, so if I scale the texture polygons the texel density of each face will not match the other meshes; if I create an atlas like that there will be varying light-map resolution. How do you solve this - just leave it as it is, or ignore resolution? Actually these questions also apply to other non-tiled maps.

    Read the article

  • TechEd 2014 Day 4

    - by John Paul Cook
    Many people visiting the SQL Server booth wanted to know how to improve performance. With so much attention being given to COLUMNSTORE and in-memory tables and stored procedures, it is easy to overlook how important tempdb is to performance. Speeding up tempdb I/O improves performance. The best way to do this is to not do the I/O in the first place. With SQL Server 2014, tempdb page management is smarter. Pages are more likely to be released before being unnecessarily flushed to disk. Read more about...(read more)

    Read the article
