Search Results

Search found 9935 results on 398 pages for 'pages'.


  • Wifi stops working in 13.10

    - by Vitor
    OK, my wifi connects fine, but then it just stops downloading and uploading data. Skype stops and tries to reconnect, Firefox Nightly and Chromium stop loading pages, everything stops. But when I look at the network icon, it still shows as connected to my wifi. If I simply reconnect, or disconnect and connect again, the wifi starts working again: the Skype icon turns green and the browsers work again. Previously I had 13.04 and never had this problem. The wifi started doing this as soon as I upgraded to Ubuntu 13.10, and it has had this issue from the first day I installed it. Anyone having the same problem? Anyone know how to fix it?

    Read the article

  • What is required to create local business rich-snippets complete with sitelinks AND breadcrumbs?

    - by Felix
    I have a local business directory site. I would like to mark up my business listing 'profile' pages for display as enhanced listings/rich snippets, complete with business names, addresses and phone numbers. I would also like to display sitelinks and path-based breadcrumbs to help users navigate the site's directory hierarchy (which is deep). Is there a limit to the number of breadcrumbs a site can mark up? Is there a separate limit on the number of breadcrumbs which Google/Bing will display in the SERP? What kind of markup language(s) would best position my site to show sitelinks AND breadcrumbs? For example: Find a business > Browse by Location > State > City > Zip, or Find a business > Choose Service > Browse by location > State > City. Thanks all!
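
    For illustration, one way to express both the business details and the breadcrumb trail is schema.org structured data on each profile page: a LocalBusiness block for the name/address/phone and a BreadcrumbList block for the path. This is only a hedged sketch in JSON-LD (microdata or RDFa work equally well); the business details and URLs are placeholders:

        <script type="application/ld+json">
        {
          "@context": "http://schema.org",
          "@type": "LocalBusiness",
          "name": "Acme Plumbing",
          "telephone": "+1-555-0100",
          "address": {
            "@type": "PostalAddress",
            "streetAddress": "123 Main St",
            "addressLocality": "Springfield",
            "addressRegion": "IL",
            "postalCode": "62701"
          }
        }
        </script>
        <script type="application/ld+json">
        {
          "@context": "http://schema.org",
          "@type": "BreadcrumbList",
          "itemListElement": [
            {"@type": "ListItem", "position": 1, "name": "Illinois",
             "item": "http://example.com/illinois/"},
            {"@type": "ListItem", "position": 2, "name": "Springfield",
             "item": "http://example.com/illinois/springfield/"},
            {"@type": "ListItem", "position": 3, "name": "Acme Plumbing",
             "item": "http://example.com/illinois/springfield/acme-plumbing/"}
          ]
        }
        </script>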

    Read the article

  • htaccess 301 redirect help needed

    - by John
    Due to some issues on my site, many pages are visible as duplicates using www.example.com/page.html?task=view, but their content is exactly the same as www.example.com/page.html. One way is to use an HTTP 301 redirect from www.example.com/page.html?task=view to www.example.com/page.html whenever anybody fetches the page with arguments. But links like www.example.com/page.html?task=view will remain visible to the outside world. Another way is canonicalization, which I don't want to use as it is difficult to insert the tag in the Joomla CMS. I want to hide www.example.com/page.html?task=view from the external world. Is it possible to change the URL from www.example.com/page.html?task=view to www.example.com/page.html? I mean, if there is an href link to www.example.com/page.html?task=view in my web page, it should be visible to the external world without any arguments. This is different from using a 301 in .htaccess to convert an externally accessed www.example.com/page.html?task=view to the version without arguments.
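
    As a hedged sketch of the first option (assuming Apache with mod_rewrite enabled), the external 301 can be done in .htaccess; note that this only fixes requests coming from outside - the links themselves will only appear argument-free once the CMS (or an SEF extension) stops emitting ?task=view:

        RewriteEngine On
        # Externally 301 any request that still carries task=view in the
        # query string to the same path with the query string dropped
        RewriteCond %{QUERY_STRING} (^|&)task=view(&|$)
        RewriteRule ^(.*)$ /$1? [R=301,L]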

    Read the article

  • Flowmotion Running on Ubuntu Server [migrated]

    - by Thomas Egan
    I am trying to configure Flowmotion to work on my Ubuntu Server. At present I use LAMP to serve pages from a VirtualBox installation. I will be moving this to a dedicated server, but I would like to enable true streaming of videos using this installation. I am only interested in open source streaming for a research project, and although I have installed Flowmotion via apt-get I don't know how to start the service so that embedded videos located on the server will stream. Can anybody provide any information regarding this, or online resources I may have missed? I have checked the documentation, however it appears far too complex. Just to clarify, I'm running VirtualBox 4.2.1 on Mac OS X 10.6.8 and Ubuntu Server 12.06 64-bit.

    Read the article

  • How do I require users to sign up before they can access the web site?

    - by user1867842
    How do I get my web pages not to show when a logged-out user hits the back button? And how can I block pages the way Facebook does: it doesn't let you into the site without an account, and if you type a URL to something on the site it gives you a page that says "you have to be logged in first". I don't want someone going to the URL of the "index" page before they have signed up as a member; they need to make an account first, and then they can have access to the "index" page. How do I do this? So far I have a website with a database and 5 pages, two of which are the login and sign-up pages; both are built with PHP and MySQL and work fine. How do I restrict access to the main website so that users must first sign up with me for an account?
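
    As a minimal sketch (assuming the login page sets $_SESSION['user_id'] on success; file names are placeholders), every members-only PHP page can start with a small check that also disables caching, so the back button after logout can't show the protected page:

        <?php
        // auth_check.php - include at the very top of every members-only page
        session_start();

        // Ask browsers/proxies not to cache, so Back after logout re-requests
        // the page instead of showing a stale copy
        header('Cache-Control: no-store, no-cache, must-revalidate');
        header('Pragma: no-cache');
        header('Expires: 0');

        // No logged-in session? Send the visitor to the login page
        if (empty($_SESSION['user_id'])) {
            header('Location: login.php');
            exit;
        }

    Then index.php (and any other protected page) would simply begin with require 'auth_check.php';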

    Read the article

  • wifi works only after connecting through wire

    - by orustam
    I have a fresh install of Ubuntu 12.04. It is my first Ubuntu installation and I'm a bit confused about the network connection. Wifi shows up and connects (at least it shows that the connection is established), but I can't open any pages; I've tried to ping some sites and that fails too. If I connect through a wire it works. What is interesting to me is that after I have used my wired connection, I can use my wifi properly without the wire plugged in. I think it probably has to do with my settings? I tried to find a solution but can't figure it out on my own. My proxy is set to none (applied system-wide). Please help me if you have any clue :)

    Read the article

  • Microformats, Reviews and Duplicate Content

    - by Nicholas
    Let's say I have a site that sells widgets, and the URL structure is like so: /[type-of-widget]/[sub-type]/[widget-name]/ So, a URL for a widget might be: /screwdrivers/philips-screwdrivers/acme-big-screwdriver/ We show reviews on the widget page, and use the appropriate microformat data so Google knows it's a review, etc. Now, what if I want to show random reviews in the "sub-type" and "type-of-widget" landing pages? Will Google ding me for duplicate content, or is it smart enough to know (based on microformat data/etc.) that this is not duplicate content?

    Read the article

  • Joomla url issue with sh404SEF

    - by user5858
    It's been a couple of months that I've been using sh404SEF with my site. But on my site I'm getting URLs in the form: http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view. If I remove this suffix (?task=view), it takes us to the same page. I had raised this issue in the sh404SEF forum, and I was told that this data is taken as a parameter by search engines and hence ignored. I want to redirect, using RewriteMatch in .htaccess, all such URLs to the URLs without ?task=view: ....downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view should be redirected to http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html. So my question is: will this redirection create 404s in Google Webmaster Tools? I have thousands of pages on the site.
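
    A note and a hedged sketch: mod_alias directives like RedirectMatch only see the URL path, not the query string, which is why this kind of removal is usually done with mod_rewrite instead. Assuming Apache with mod_rewrite enabled, something like this in .htaccess should 301 the ?task=view variants to the clean .html URLs; since the targets return 200, the redirect itself should not create 404s in Google Webmaster Tools:

        RewriteEngine On
        # 301 any .html URL requested with ?task=view to the same URL
        # without the query string
        RewriteCond %{QUERY_STRING} (^|&)task=view(&|$)
        RewriteRule ^(.+\.html)$ /$1? [R=301,L]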

    Read the article

  • Will I keep Google traffic on the new site when moving content from the old site? [closed]

    - by user1324762
    Possible Duplicate: new domain, old links are 301'd from old domain to new, how will this affect my rankings? I have a site about bikers. Now I have created a dating site for bikers. I don't need the old site any more; I want to move all articles to this new dating site. So basically, this is not only moving content to a new domain, but to an entirely new site. What I am planning to do is to set up 301 redirects for all 200 articles. For pages that are not articles, I will just put up a message that the site will be down soon. Do you think that I will get all the Google traffic those articles bring to the old site? Is there anything I should be aware of or careful about?
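
    For illustration, a hedged .htaccess sketch for the old domain (assuming Apache with mod_rewrite, and that the article slugs stay the same on the new site; the /articles/ prefix and domain are placeholders):

        RewriteEngine On

        # One-to-one 301s for the articles being moved
        RewriteRule ^articles/(.+)$ http://new-dating-site.example/articles/$1 [R=301,L]

        # Everything else shows a temporary "site closing soon" notice
        RewriteRule !^closing\.html$ /closing.html [R=302,L]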

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps which contain all of these pages (1M+), because they contain only the file name/size/number of downloads and the download link(s). I'm considering this because I made another website like this in the past and Google banned me after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?!). Does someone have an idea about how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap or wait until Google indexes every page as it finds them? Thanks in advance :)
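
    One mechanical point, independent of the spam question: the sitemaps protocol caps each sitemap file at 50,000 URLs, so a million-plus pages have to be split into child sitemaps referenced by a sitemap index. A hedged sketch (file names and dates are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://example.com/sitemaps/files-00001.xml.gz</loc>
            <lastmod>2012-07-01</lastmod>
          </sitemap>
          <sitemap>
            <loc>http://example.com/sitemaps/files-00002.xml.gz</loc>
            <lastmod>2012-07-01</lastmod>
          </sitemap>
          <!-- ...one entry per 50,000-URL child sitemap... -->
        </sitemapindex>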

    Read the article

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0! Ask and vote for questions at: bit.ly mod_pagespeed is an open-source Apache module that automatically optimizes web pages and the resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google, and talk about the history of mod_pagespeed, its fast-growing adoption (130K+ sites!), its technical architecture and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, then this is definitely a show you won't want to miss! From: GoogleDevelopers

    Read the article

  • Tricky mod_rewrite challenge

    - by And Finally
    I list about 9,000 records on my little site. At the moment I'm showing them with a dynamic page, like http://domain.com/records.php?id=019031 But I'd like to start using meaningful URLs like this one on Amazon http://www.amazon.co.uk/Library-Mythology-Oxford-Worlds-Classics/dp/0199536325 where the title string on the root level gets ignored and requests are redirected to the records.php page, which accepts the ID as usual. Does anybody know how I could achieve that with mod_rewrite? I'm wondering how I'd deal with requests to my other root-level pages, like http://domain.com/contact.php, that I don't want to redirect to the records page.
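
    A hedged mod_rewrite sketch (assuming Apache, an Amazon-style /title-string/dp/ID pattern, and numeric IDs; adjust the pattern if the URL scheme differs). Real files such as contact.php are excluded first, so the other root-level pages keep working:

        RewriteEngine On

        # Leave existing files and directories (contact.php, css, images...) alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # /any-title-string/dp/019031  ->  records.php?id=019031
        # (the title segment is ignored; only the trailing ID is captured)
        RewriteRule ^[^/]+/dp/([0-9]+)/?$ records.php?id=$1 [L,QSA]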

    Read the article

  • Tracking a single page on another domain in Google Analytics

    - by Ross
    I have access to edit a 'mini-site' hosted on our organisation's parent site. I'd like to track this page using Google Analytics, however I don't have access to the front page so I can't verify this as my domain. Using the tracking code for our main site works, however I don't want this data to be confused with similarly named pages on our site (for example, our mini-site is at /radio, and if we had a /radio at our main site this would be counted as the same). Has anyone been in this situation before? I'd like to just redirect visitors to our mini-site to our main site, seeing as it ranks higher in Google, but I've been told to maintain a separate site with our main features.
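
    One hedged option, if a separate GA property for the mini-site isn't possible: keep the main property but report an explicit virtual path, so hits from the mini-site can never collide with a /radio page on the main site. A sketch using the classic ga.js async snippet (the property ID and virtual path are placeholders):

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXXX-Y']);
          // Explicit virtual path instead of the real location
          _gaq.push(['_trackPageview', '/minisite/radio/']);

          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
                     + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>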

    Read the article

  • Dual Screens not working nVidia

    - by user91396
    So I'm very much an Ubuntu noob; in fact I just installed Ubuntu on my PC. I started it up with both my screens plugged into my nVidia card's DVI and VGA ports and logged in, changed the skin to classic GNOME (because that's how it was when I last used Ubuntu, 8.1), and both screens were working separately. The trouble is that I got a notification saying there were nVidia drivers to be installed, so I installed them and restarted my PC as it told me to, and when I got back on, only one of my screens was working. When I go into Displays (All Settings, Displays) it doesn't register my other screen at all, and it calls my working screen "Laptop". I've tried looking through several pages of Google but I see no answer. I did try to find nvidia-settings to see if that had the answer, but sadly I couldn't locate it. Thanks in advance for any help, but please remember, I am very new to Ubuntu.

    Read the article

  • Convert Microsoft Word documents (.doc/x) into HTML files

    - by danie7L T
    Does anybody know of a good application to get this done quickly and efficiently? I bought Word Cleaner but the results are merely sufficient, and I need to go over all the generated HTML files to clean tons of useless injected tags like <strong>H</strong><strong>ell</strong><strong>o </strong><em>Wor</em><em>ld</em>. Most of the articles displayed on a website I manage are based on documents written in MS Word by people who have little idea of what margins are for, or of ordered/unordered lists, foot/end notes etc., and I cannot make them use something else. Does anyone have a tip to help me handle those pages more efficiently than going over them to correct them and apply my CSS style? NB: Just for the record, using Word's "Save as HTML" is far worse than Word Cleaner.

    Read the article

  • Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses

    Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses. Update of 14/01/11: as we anticipated yesterday (see below), WebMatrix, Microsoft's new IDE, has been released. WebMatrix is an all-in-one tool aimed at all developers, but particularly at students and at anyone looking for a simple, fast way to build a website. It includes IIS Express (a development web server), ASP.NET Web Pages (a web development technology), and SQL Server Compact (an embedded database). "WebMatrix democratizes the web platform by...

    Read the article

  • How to CURL and avoid timeout death (Twitter Down) [migrated]

    - by David
    Twitter is down right now, and one of my site's home pages relies on getting data from Twitter (the "relies" is the problem - it should be more of an accessory feature, as it just shows the follower count from its feed). Here's the code in question:

        function socials_Twitter_GetFollowerCount($username) {
            $method = function () use ($username) {
                return file_get_contents('https://api.twitter.com/1/users/show.json?screen_name='.$username.'&include_entities=true');
            };
            $json = cache('bmdtwitter', 3600, $method, false);
            $json = json_decode($json, true);
            return intval($json['followers_count']);
        }

    What is a good way to make it so that if Twitter is down (or not responsive for some reasonable amount of time), our site doesn't appear to be down? I think the timeout may be defaulting to 30-60 seconds or more.
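
    One hedged way to keep the page render from hanging is to put a short read timeout on the request and fall back to a harmless value when Twitter doesn't answer. The sketch below (a hypothetical replacement for the closure passed to cache()) uses a stream context; curl with CURLOPT_TIMEOUT would work just as well:

        // Sketch: fetch with a 3-second timeout; on failure return a stub
        // JSON so the rest of the page still renders
        $method = function () use ($username) {
            $context = stream_context_create(array(
                'http' => array('timeout' => 3)  // seconds
            ));
            $url  = 'https://api.twitter.com/1/users/show.json?screen_name='
                  . urlencode($username) . '&include_entities=true';
            $json = @file_get_contents($url, false, $context);
            return ($json === false) ? '{"followers_count":0}' : $json;
        };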

    Read the article

  • Where would you start if you were trying to solve this PDF classification problem?

    - by burtonic
    We are crawling and downloading lots of companies' PDFs and trying to pick out the ones that are Annual Reports. Such reports can be downloaded from most companies' investor-relations pages. The PDFs are scanned and the database is populated with, among other things, the title, the contents (full text), the page count, the word count, the orientation and the first line. Using this data we are checking for the obvious phrases such as "annual report", "financial statement", "quarterly report" and "interim report", then recording the frequency of these phrases and others. So far we have around 350,000 PDFs to scan and a training set of 4,000 documents that have been manually classified as either a report or not. We are experimenting with a number of different approaches including Bayesian classifiers and weighting the different factors available. We are building the classifier in Ruby. My question is: if you were thinking about this problem, where would you start?
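
    If it helps as a starting point, a hedged Ruby sketch of the feature-extraction step: turn each document's extracted text into phrase counts that a Bayesian classifier (or a hand-tuned weighted score) can consume. The phrase list and method name are illustrative only:

        REPORT_PHRASES = [
          'annual report',
          'financial statement',
          'quarterly report',
          'interim report'
        ]

        # Count occurrences of each phrase in the extracted PDF text
        def phrase_features(text)
          normalized = text.downcase
          REPORT_PHRASES.each_with_object({}) do |phrase, features|
            features[phrase] = normalized.scan(phrase).length
          end
        end

        phrase_features("ACME Corp Annual Report 2011 ... financial statement ...")
        # => {"annual report"=>1, "financial statement"=>1,
        #     "quarterly report"=>0, "interim report"=>0}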

    Read the article

  • Chrome window freezes in Ubuntu

    - by Dragon5689
    Sometimes, especially when I open pages that have some kind of multimedia content, Chrome freezes. It always happens directly after opening a new tab. In contrast to the way Chrome usually has only tabs crashing, the entire window freezes. If I have multiple separate Chrome windows open, the others keep working. I run Ubuntu 12.04 and Chrome version 20.0.1132.47, but this has been going on since I last set up my machine around half a year ago. Anyone having the same problem, or an idea what could be wrong here?

    Read the article

  • Mouse cursor is MASSIVE inside of firefox and chromium

    - by user171396
    While installing Ubuntu I accidentally hit the high contrast option. I could not figure out how to disable it within the installer, so I let it complete. I booted up into Ubuntu 13.04 and high contrast was still on. I disabled it in Universal Access, and now I'm noticing my mouse cursor is huge in web browsers. This is very much a stock install. Is there a setting to disable the HUGE mouse? I mean the thing is 4 times the size of text etc. on normal pages, and it's only in browsers from what I've seen so far. EDIT: Looks like it's in everything with text... terminal, app store, folders and files... /sigh.

    Read the article

  • Is doing AB Tests using site redirection a bad practice?

    - by user40358
    I'm developing hotel websites here in Brazil. When a site is done, we do an A/B test against the old version to measure conversion and show the hotel owner how good our site is. Due to the fact that I cannot put the old site inside the new one as a subresource (newone.com/old), currently I'm doing those A/B tests as follows: 1) I create 2 Google Analytics accounts, one for each site (old and new); 2) I put the GA tags in the old website's pages (changing its possibly existing GA ID to the just-created one); 3) I put in JavaScript code that redirects the user to the old website (on a different URL and a different domain) with 50% probability. Then I compare all the metrics, events and goals between those two GA accounts. How bad is it? How might Google interpret the fact that visitors are sometimes redirected and sometimes not? The experiment usually runs for 2 weeks. Is there any other alternative for doing this in a better way?
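
    On the redirect mechanics, a hedged sketch: making the assignment sticky with a cookie (so a returning visitor always sees the same variant instead of being re-randomized on every page view) usually makes the numbers cleaner and the experience less confusing. The cookie name and old-site URL are placeholders:

        <script>
          (function () {
            var match = document.cookie.match(/(?:^|; )ab_variant=([^;]+)/);
            var variant = match ? match[1] : (Math.random() < 0.5 ? 'old' : 'new');
            if (!match) {
              // Remember the assignment for 14 days
              var expires = new Date(Date.now() + 14 * 24 * 60 * 60 * 1000);
              document.cookie = 'ab_variant=' + variant +
                                '; expires=' + expires.toUTCString() + '; path=/';
            }
            if (variant === 'old') {
              // The old site lives on a different domain in this setup
              window.location.replace('http://old-hotel-site.example' +
                                      window.location.pathname);
            }
          })();
        </script>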

    Read the article

  • Request for some opinions about a vertical menu style and some suggestions for the site style [on hold]

    - by AndreaNobili
    I am developing a mostly static website using WordPress (because maybe in the future I will add some dynamic content) for a company. The new site has to follow the structure of the old site, which requires the presence of a vertical main menu in the left column containing links to all the static pages on the site. This is the old site structure: http://www.saranistri.com/ Now I have installed a new WordPress test site (this is only a test site): http://onofri.org/example/ As you can see, in the left column I have put two vertical main menu widgets that implement two possible choices for the main menu (the top menu above the header must be eliminated in the final implementation). I want to know some opinions about: 1) Which of the two versions is better? Do you have some additional ideas about the CSS style of this vertical menu? 2) What could I do to give a more professional look to this site? (I know that I have to insert a logo into the header.) Thanks, Andrea

    Read the article

  • The Free Software Foundation presents its list of high-priority open source projects and invites users to contribute to the growth of free software

    Free software needs you! The Free Software Foundation invites users to contribute to free software projects. Free software: we love it, we use it, we share it, and we even sell it. But how many of us give even a little thought to supporting free software? How many of us routinely skip the "Donate" button on the official pages of projects like The GIMP? How many of us devote a little of our time to promoting free software? In the mindset of...

    Read the article

  • Sense of "stop on..." stanza when job is a task

    - by Binarus
    Hi, an upstart question (I think I have read all the relevant man pages but could not find the answer there): what is the sense of using a "stop on ..." stanza in the definition of a job which is a task? The manuals tell us that such a job, after being started, just waits until its script (or exec stanza) has executed completely, and then stops automatically. Given that, what is the point in using "stop on ..." stanzas in such job definitions? For example, this is the job definition for Upstart's (very important) rc job in Natty 11.04 (leaving out comments and empty lines):

        start on runlevel [0123456]
        stop on runlevel [!$RUNLEVEL]

        export RUNLEVEL
        export PREVLEVEL

        console output
        env INIT_VERBOSE

        task

        exec /etc/init.d/rc $RUNLEVEL

    IMHO, the job, after being started by a runlevel event, will be stopped automatically as soon as /etc/init.d/rc $RUNLEVEL has finished. Thank you very much for any explanation!

    Read the article

  • What's new in ASP.NET MVC 2 Wrox Blox available for purchase

    My latest book, the What's New in ASP.NET MVC 2 Wrox Blox, is now available for purchase from the Wrox store at the cost of US $6.99. For those who are not familiar with them, Wrox Blox are short and concise ebooks that cover very specific topics. Ranging from 30 to 70-80 pages, they are a very good option in case you need to solve a specific problem, or learn a specific technology, but don't want to buy a whole book when you would only read a chapter or two. And this ebook is exactly like that:...

    Read the article
