Search Results

Search found 14037 results on 562 pages for 'alter index'.

Page 298/562 | < Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >

  • Do WordPress websites get indexed quicker by search engines than a regular website?

    - by guisasso
    I registered a couple of domains with the names of categories of products we sell. I then installed WordPress on one of those domains, played around with it for a bit, and left it alone for about a month. There was a link from my regular website to that secondary website, and the secondary website was also registered in Google's Webmaster Tools, but that's it. Last week I searched Google for that product category, and to my surprise the secondary website showed up on the 2nd or 3rd page of results. Now my question is: do search engines index WordPress websites quicker? I had given up on using WordPress for that website, since it's so simple, but if it would give me better results, should I use it? Thanks in advance for the help, if the question is not deleted.

  • "Failed to fetch" while updating

    - by Farouk BA
    I'm trying to update Ubuntu 12.10, but I keep getting "Failed to fetch" errors:

    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/quantal-security/Release  Unable to find expected entry 'independent/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/quantal/Release  Unable to find expected entry 'independent/source/Sources' in Release file (Wrong sources.list entry or malformed file)
    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/quantal-updates/Release  Unable to find expected entry 'independent/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/quantal-backports/Release  Unable to find expected entry 'independent/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    I changed the server and deleted the source lists from /var/lib/apt/lists/ as some answers suggest, but no luck. This is really annoying.
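
    The "Unable to find expected entry 'independent/...'" wording points at a malformed line in sources.list rather than at the mirrors: "independent" is not a valid Ubuntu component (the standard ones are main, restricted, universe, multiverse). A minimal sketch of how one might find and fix the offending line, assuming the stray word really is "independent":

        # locate the bad component in the APT sources
        grep -rn "independent" /etc/apt/sources.list /etc/apt/sources.list.d/
        # replace it with the component you actually meant, e.g. universe
        sudo sed -i 's/\bindependent\b/universe/g' /etc/apt/sources.list
        sudo apt-get update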

  • Is it OK to have 2 sitemaps on 1 website?

    - by user615041
    Do I have to link the sitemap from my index page for bots to read it, or can I just have it anywhere on my server? I have a phpBB/WordPress integration and I need two sitemap mods, one for each (or I need to have them somehow integrated into one XML sitemap). Is this possible? What's my best option? I would have the phpBB one at something like http://www.example.com/phpbb/sitemap.html and the WordPress one at something like http://www.example.com/wordpress/sitemap.html, and then I would submit both, but not put the links in my footer, to avoid confusing anyone; the sitemaps would be strictly for search engines. Is this a good idea? What are your thoughts?
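
    For what it's worth, the sitemap protocol has a built-in way to combine several sitemaps: a sitemap index file, which you submit once and which points at the others. A sketch using the two hypothetical locations above (the child sitemaps would normally be XML rather than HTML):

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/phpbb/sitemap.xml</loc>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/wordpress/sitemap.xml</loc>
          </sitemap>
        </sitemapindex>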

  • 12.04 sound keeps auto-muting when idle

    - by fali
    I just installed 12.04 on an HP 8510w. Everything works fine except for one weird behavior: whenever no audio is playing, the mute indicator on the laptop lights up. As soon as I start playing a YouTube video, the mute indicator turns off and I get sound. Here is my PulseAudio output, which says that the sink is suspended because it is idle:

    Welcome to PulseAudio! Use "help" for usage information.
    list-sinks
    1 sink(s) available.
        index: 0
        name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
        driver: <module-alsa-card.c>
        flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY
        state: SUSPENDED
        suspend cause: IDLE

    I tried running alsamixer, but I don't see the Auto-Mute option.
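
    The "suspend cause: IDLE" line comes from PulseAudio's module-suspend-on-idle, and on some HP laptops suspending the sink also lights the hardware mute LED. A sketch of how one might disable that module to test the theory (path assumes a stock 12.04 install):

        # comment out the suspend-on-idle module in the system-wide PulseAudio config
        sudo sed -i 's/^load-module module-suspend-on-idle/#&/' /etc/pulse/default.pa
        # restart PulseAudio for the current user
        pulseaudio -k && pulseaudio --start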

  • Google Analytics - bad experiences? (esp. adult content)

    - by Litso
    Hello all, I work for a rather large adult website, and we're currently not using Google Analytics. There is an internal debate going on about whether we should start using Analytics, but there is hesitation from certain parties. The main argument is the fear that Google will get too much insight into our website, and might even block us from the index as a result, based on our adult content. Has anyone here ever had such an experience, or heard of bad experiences with Google Analytics in this regard? I personally think it would only improve our website if we were able to use Analytics, but the dev team was asked to look into possible negative effects. Any help would be appreciated.

  • 11.10 - Update Manager not working

    - by Mattlinux1
    W: Failed to fetch cdrom://Ubuntu 11.10 Oneiric Ocelot - Release i386 (20111012)/dists/oneiric/main/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W: Failed to fetch cdrom://Ubuntu 11.10 Oneiric Ocelot - Release i386 (20111012)/dists/oneiric/restricted/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    This happens when I hit the Check button; the updates were working before.
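
    These errors mean APT is still trying to read package lists from the installation CD. One way to stop that, sketched below, is to comment out the cdrom entries (the Software Sources dialog can do the same thing with a checkbox):

        # disable any CD-ROM package sources, then refresh
        sudo sed -i '/^deb cdrom:/s/^/# /' /etc/apt/sources.list
        sudo apt-get update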

  • Redirect packets directed to port 5000 to another port

    - by tdc
    I'm trying to use eboard to connect to the FICS server (http://www.freechess.org), but it fails because port 5000 is blocked by the company firewall. However, I can connect to the server through the telnet port (23):

    telnet freechess.org 23 (succeeds)
    telnet freechess.org 5000 (fails)

    Unfortunately the port number is hardcoded (see here: http://ubuntuforums.org/archive/index.php/t-1613075.html), and I'd rather not hack the source code as the author of that thread ended up doing. Can I just forward the port on my local machine using iptables? I tried:

    sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5000 -j REDIRECT --to-port 23

    and

    sudo iptables -t nat -I OUTPUT --src 0/0 -p tcp --dport 5000 -j REDIRECT --to-ports 23

    but these didn't work. Note that:

    $ sudo iptables -t nat -L
    Chain PREROUTING (policy ACCEPT)
    target     prot opt source      destination
    REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:5000 redir ports 23

    Chain INPUT (policy ACCEPT)
    target     prot opt source      destination

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source      destination
    REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:5000 redir ports 23

    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source      destination
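
    One detail worth noting: REDIRECT rewrites the destination to the local machine, so the rules above hand the connection to port 23 on your own box, not on freechess.org. For locally generated traffic, DNAT in the nat OUTPUT chain is the closer fit; a sketch, assuming freechess.org resolves to a single stable address and that dig is available:

        # rewrite the destination port for outgoing connections to freechess.org
        FICS_IP=$(dig +short freechess.org | head -n1)
        sudo iptables -t nat -A OUTPUT -p tcp -d "$FICS_IP" --dport 5000 \
            -j DNAT --to-destination "$FICS_IP:23"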

  • How to cleanly add after-the-fact commits from the same feature into git tree

    - by Dennis
    I am one of two developers on a system, and I make most of the commits at this time. My current git workflow is as follows: there is a master branch only (no develop/release); when I want to do a feature I make a new branch, do lots of commits, and when I'm done I merge that branch back into master and usually push it to remote. ...Except I am usually not done. I often come back to alter one thing or another, and every time I think it is done, but it can be 3-4 commits before I am really done and move on to something else.

    Problem
    My feature branch is merged and pushed into master and remote master, and then I realize I am not really done with that feature: I have finishing touches to add, which may be cosmetic or significant, but still belong to the one feature I just worked on.

    What I do now
    Currently, when I have extra after-the-fact commits like this, I roll back my merge and re-merge my feature branch into master with the new commits, so that the git tree looks clean: one feature branch branched out of master and merged back into it. I then push --force to origin. Since my origin doesn't see much traffic at the moment, I can almost count on things being safe, or I can talk to the other dev if I have to coordinate. But I know this is not a good approach in general, as it rewrites what others may have already pulled, causing potential issues. It did in fact bite us once, when git had to do an extra weird merge after our trees diverged.

    Other ways to solve this, which I deem not so great
    The next best way is to just make those extra commits on the master branch directly, fast-forward merge or not. It doesn't make the tree look as pretty as my current approach, but it doesn't rewrite history. Another way is to wait, say 24 hours, before pushing to origin; that way I can rewrite things as I see fit, at the cost of time wasted waiting while people may be waiting for a fix now. Yet another way is to make a "new" feature branch every time I realize I need to fix something extra; I may end up with things like feature-branch, feature-branch-html-fix, feature-branch-checkbox-fix, and so on, polluting the git tree somewhat.

    Is there a way to manage what I am trying to do without the drawbacks described? I'm going for clean-looking history here, but maybe I need to drop that goal if it is technically not possible.
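
    One pattern that fits this situation, sketched below under the assumption that the branch has not been pushed yet, is to record the finishing touches as fixup commits and fold them into their targets with an autosquash rebase before merging; once a merge has been pushed, plain follow-up commits on master are the safer choice:

        git checkout feature-branch
        # ...make the finishing touches...
        git commit --fixup=<sha-of-the-commit-being-polished>   # placeholder sha
        # squash the fixups into their target commits before publishing
        git rebase -i --autosquash master
        git checkout master
        git merge --no-ff feature-branch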

  • Not getting updates

    - by gknarayana
    When I check for updates, the message is:

    W: Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release i386 (20120423)/dists/precise/main/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W: Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release i386 (20120423)/dists/precise/restricted/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W: Failed to fetch cdrom://Ubuntu 11.10 _Oneiric Ocelot_ - Release i386 (20111012)/dists/oneiric/main/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W: Failed to fetch cdrom://Ubuntu 11.10 _Oneiric Ocelot_ - Release i386 (20111012)/dists/oneiric/restricted/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    Please suggest what I should do.

  • Duplicate pages

    - by Mert
    I made a small coding mistake and Google indexed my site wrongly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE, but Google indexed pages like this: https://www.foo.com/urunler/171/cart.aspx. I fixed the problem first, then made a sitemap containing only the correct links. Now I checked Webmaster Tools and I see this:

    Total indexed 513
    Not selected 544
    Blocked by robots 0

    I think the "Not selected" count is caused by the duplicate indexing. I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix it in code, or ask Google to reindex my site? If I should redirect the wrong/duplicate links to the correct ones, what is the best way? Thanks for your time in advance.
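
    One way to handle the stray cart.aspx URLs directly, rather than waiting on the sitemap, is a rel=canonical tag (or a permanent 301 redirect) pointing at the URL you want kept; Google then folds the duplicates into the canonical page. A sketch of the tag, placed in the <head> of the duplicate variant:

        <link rel="canonical"
              href="https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE" />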

  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a database administrator, or even a developer, is required to wear a spy's hat. This necessity is often dictated by a need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources, or, as in my most recent case, a closed-source custom application abandoned by a departed contractor without source code. It may not be news to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This gem sits quietly in the Start - Programs - SQL Server <product version> - Performance Tools folder (yes, it is mostly for performance analysis, but not limited to that), ready to help you. So, to action: start it up, then click File - New Trace (or press Ctrl-N). The standard connection dialog you have seen in SSMS comes up, and you connect the standard way. One side note: you will be able to connect only if your account belongs to the sysadmin or alter trace fixed server role. Upon a successful connection you will see the initial trace-properties dialog, which offers a wide variety of predefined templates; to shorten your time to results, opt for the TSQL_Grouped template. Now you need to set it up. In some cases you will know in advance the principal's login name (account) to be monitored, and in some (like mine) you will not; either way it is VERY helpful to monitor just one particular account, to minimize the amount of results returned. If you know it, go to the Events Selection tab and click the Column Filters button, which brings up a dialog where you key in the account being monitored, without any mask (or wildcard). If you do not know the principal name, poke around for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, unencrypted, which made my endeavor very easy. So after I entered the account to monitor, I clicked the Run button and started my black-box application. Voilà: in under a minute I had the SQL statement captured.
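
    If running the Profiler GUI is ever not an option, roughly the same capture can be done server-side with an Extended Events session. A sketch, assuming SQL Server 2008 or later and a monitored login of app_login (a made-up name):

        -- capture completed batches for one login into a ring buffer
        CREATE EVENT SESSION [CaptureAppSql] ON SERVER
        ADD EVENT sqlserver.sql_batch_completed
            (ACTION (sqlserver.sql_text, sqlserver.client_app_name)
             WHERE (sqlserver.server_principal_name = N'app_login'))
        ADD TARGET package0.ring_buffer;
        GO
        ALTER EVENT SESSION [CaptureAppSql] ON SERVER STATE = START;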

  • How to recommend that Google index some keywords?

    - by Werewolf
    I've read many articles about SEO and tried to implement my knowledge on a site, but I haven't gotten good results in 6 months. For example, I've used Google Webmaster Tools, sitemaps, title tags, keywords in paragraphs, etc. My Alexa rank is growing, but Google picked up some keywords that aren't my goal :-(. Is there a good way to focus on a keyword in search engines? How can I recommend to Google which keywords to index? (They are available in my pages.)

  • Join Us at Oracle OpenWorld Latin America (Dec 4-6)

    - by Zeynep Koch
    Hello to all Latin Americans, Oracle OpenWorld Latin America starts tomorrow. Oracle Linux will be showcased in several sessions and in the exhibition area. Here are the links and details for our sessions:

    Session schedules: http://www.oracle.com/openworld/lad-en/session-schedule/index.html

    Oracle Linux sessions:
    New Features in Oracle Linux: A Technical Deep Dive, Dec 4, 13:30-14:30, Mezzanine Room 7
    Oracle Linux Strategy and Roadmap, Dec 4, 17:15-18:15, Mezzanine Room 5

    Oracle OpenWorld Latin America exhibition hall hours:
    Tuesday, December 4: 12:00–19:30 (dedicated hours 18:15–19:30)
    Wednesday, December 5: 11:00–19:30 (dedicated hours 18:30–19:30)
    Thursday, December 6: 11:00–19:00 (dedicated hours 17:45–19:00)

    We will also hand out the following at our booth, so don't forget to visit us:
    - Oracle Linux and Oracle VM DVD Kit
    - Server Virtualization for Dummies

    See you there :)

  • WordPress Multisite Network installation and dev questions

    - by Daitya
    Please go easy on me, I'm a klutzy dinosaur. I currently have a large, unwieldy website hand-coded in HTML/CSS with PHP includes, with a single WordPress installation in a subdirectory. The plan is to reorganize: I want to use WordPress as the CMS and incorporate 3 WordPress blogs for 3 subdomains. Ideally, I would like to create a WordPress multisite network to allow for further expansion and to save admin trouble. I just want to confirm: if I install WordPress in the root directory and create 3 blogs (in subdomains), does this mean my website's home page is the mother blog's index.php? Essentially, I will have created 4 blogs: a mother at the root and 3 children in subdomains? How do I set this up on my Mac (OS X 10.5.8) running MAMP for development, and then migrate to the server without breaking anything?
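
    Yes: with a subdomain network, the root site becomes the mother blog and your home page is served by its index.php. The network itself is switched on in wp-config.php; a sketch of the relevant constants, assuming the live domain is example.com (note that subdomain networks are awkward to emulate under MAMP, so local development is often done as a subdirectory install):

        /* in wp-config.php, above the "stop editing" line */
        define('WP_ALLOW_MULTISITE', true);    // exposes Tools -> Network Setup
        /* after running Network Setup, WordPress asks you to add: */
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', true);     // false for a subdirectory network
        define('DOMAIN_CURRENT_SITE', 'example.com');
        define('PATH_CURRENT_SITE', '/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);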

  • What is the proper way to create a cross-fade effect? [closed]

    - by Starx
    When creating an image slider, a cross-fade is one of the more popular effects, and sliders use differing techniques to create it. Two techniques I've found so far are:

    1. Use an overlay and underlay <div> and fade each one's visibility in and out.
    2. During initialization, create a <div> matching the exact size of the slider, play with its z-index property, and fade between the two.

    Is there a better way to create this effect?
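
    A third option that avoids juggling wrapper <div>s is to stack the slides absolutely and let CSS transition their opacity, toggling a class from script. A minimal sketch (vendor-prefixed transition properties may be needed on older browsers):

        .slider            { position: relative; }
        .slider img        { position: absolute; top: 0; left: 0;
                             opacity: 0; transition: opacity 1s ease; }
        .slider img.active { opacity: 1; }

    Moving the active class from the current image to the next then fades the two simultaneously.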

  • Flash site loads slowly

    - by bogdanvursu
    I have a simple HTML page that embeds an SWF, which in turn downloads other XML, SWF, and image files; the total count of requests reaches about 90. I am aware that it takes a while until the content is available, and I am OK with that. All the needed files are hosted by two different providers in the US: flashxml.net/monochrome-demo.html and u1.flashcomponents.net/samples/8751/index.html. From two different countries in Europe, the content shows up a lot later (almost twice as late) from flashxml than from flashcomponents. I've run mtr tests: the ping difference is about 40 ms, and the flashxml server load is below 1. Do you have any suggestions as to what else I should look at?

  • Hosting a magnet link site which could possibly infringe copyrighted material?

    - by Griff
    For the last 3 months I have built a crawler, an indexer, and a lot of other things for what started out as a home project for indexing magnet links on the internet. As my project grew, I have thought about releasing my collected data (which at the minute is on a public domain, but with no access) to the public. Whatever the crawler sucks in goes in, and whatever the indexer decides to index gets indexed, as it is a fully automated process. My question is as follows: considering that most of the collected data points to illegal copyrighted material (as most magnet links do), where would it be best to host such a site? I notice the existing public torrent sites are hosted in India; is this because their laws are less strict on copyright infringement? Have any of you hosted such a site, and if so, what problems have you run into? And, as always, any advice on being a webmaster for this type of website?

  • How to direct a Network Solutions domain name to an HTML website hosted on Google Drive? [on hold]

    - by Air Conditioner
    To begin with, I wanted to take advantage of HTML, CSS, and so on to build a website that looks and works just as I'd like it to. I took a look around at how I could make that work, and I soon saw a Lifehacker article showing that it's possible to host website files on Google Drive. I made sure that the folder containing the files was shared publicly on the web, and I now have a working Google-Drive-hosted address for the website. However, I wanted a custom domain, so I registered one with Network Solutions. Now I'm curious how I should direct my Network Solutions domain to the index.html I'm hosting on Google Drive. Would anyone have an idea?

  • T-SQL Tuesday: What kind of Bookmark are you using?

    - by Kalen Delaney
    I’m glad there is no minimum length requirement for T-SQL Tuesday blog posts, because this one will be short. I was in the classroom for almost 11 hours today, and I need to be back tomorrow morning at 7:30. Way back in SQL 2000 (or was it earlier?), when a query indicated that SQL Server was going to use a nonclustered index to get row pointers and then look up those rows in the underlying table, the plan just had a very linear look to it. The operator that indicated going from the nonclustered...(read more)
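
    As a one-line refresher on the terminology: a bookmark lookup happens when a nonclustered index supplies the seek but not every queried column, so each matching row must also be fetched from the underlying table. A sketch that would typically show a lookup operator in the plan (table and index names are invented):

        CREATE INDEX ix_orders_custid ON dbo.Orders (CustomerID);
        -- CustomerID is in the index, OrderTotal is not, so each
        -- qualifying row is looked up in the clustered index or heap
        SELECT CustomerID, OrderTotal
        FROM dbo.Orders
        WHERE CustomerID = 42;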

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages, where new pages are published regularly. When publishing a new page, the website operator wants the newly created page indexed by Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00, and I want important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
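
    Besides listing the page in a sitemap, Google accepts an HTTP "ping" announcing that a sitemap has changed, which is cheap to fire at publish time; a sketch using the documented ping endpoint:

        # tell Google the sitemap (which now lists important-page.html) has changed
        curl "http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml"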

  • "Unable to connect" to getdeb.net, how do I fix it?

    - by Nirmik
    I want to know what this error means and how to fix it. The following is the output on the terminal:

    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/Release.gpg  Unable to connect to archive.getdeb.net:http:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/apps/binary-i386/Packages  Unable to connect to archive.getdeb.net:http:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/apps/i18n/Translation-en_IN  Unable to connect to archive.getdeb.net:http:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/apps/i18n/Translation-en  Unable to connect to archive.getdeb.net:http:
    E: Some index files failed to download. They have been ignored, or old ones used instead.
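
    "Unable to connect" means the getdeb.net mirror itself is unreachable, so the practical fix is to disable that repository until the mirror is back. A sketch, assuming the entry lives in its own file under sources.list.d:

        # comment out the getdeb lines, then refresh the package lists
        sudo sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/getdeb.list
        sudo apt-get update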

  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have git 1.7.11.1 on Windows and a local test repository with 2 branches. One is master, with index.php and help.php. I then create another branch called slave :) From Git Bash I run rm help.php, and it disappears from the folder, but I don't stage anything. I switch to checkout the master branch, and it is supposed to restore help.php, because it is not modified in the master branch, isn't it? But it does not. When I go back to the slave branch, commit, and then checkout master, help.php appears. Is that the way it is supposed to work? Why?
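
    This is the expected behavior: an uncommitted change, including a deletion, lives in the working tree, and git checkout carries it across branches rather than undoing it. To discard the deletion without committing, restore the file from the index, e.g.:

        git checkout -- help.php   # restore the deleted file in the working tree
        git status                 # should now report a clean tree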

  • Procurement Search Helpers

    - by Oracle_EBS
    To access all our Procurement Search Helpers, see Doc ID 1391332.2, our Procurement Information Center Index, then click on Purchasing under Procurement Suite. There you will see links to our Procurement Search Helpers. Search Helpers provide a collection of solutions based on the symptoms you enter; try these before logging a Service Request. If you are not sure how to use Search Helpers, click on 'About this Note' in each document. Current Procurement Search Helpers:

    Doc ID 1361856.1 - EBS: Purchase Order and Requisition Approval Search Helper (In Process or Incomplete Status)
    Doc ID 1377764.1 - EBS: PO Output for Communication / Supplier Notification Issues Search Helper
    Doc ID 1364360.1 - EBS: Requisition To Purchase Order Search Helper
    Doc ID 1369663.1 - EBS: Purchase Document Open Interface and API Search Helper
    Doc ID 1391970.1 - EBS: Search Helper for RVTII-060 Errors in Receiving
    Doc ID 1394392.1 - EBS: Purchasing Buyer Work Center Search Helper
    Doc ID 1470034.1 - EBS: Document Control Issues Search Helper

  • How to fix "The system is running in low-graphics mode" error?

    - by jokerdino
    Note: This is an attempt to create a canonical question that covers all instances of the "low-graphics mode" error, including but not limited to: installation of wrong drivers, incorrect or invalid LightDM greeters, low disk space, incorrect installation of graphics drivers such as ATI and NVIDIA, and incorrect configuration of the xorg.conf file while setting up multiple monitors, among others. If you are experiencing the "low-graphics mode" error when trying to log in but none of the following answers work for you, please ask a new question, and then update the answers to this canonical question as and when your new question gets answered. When I try to boot into my computer, I am getting this error:

    The system is running in low-graphics mode
    Your screen, graphics cards, and input device settings could not be detected correctly. You will need to configure these yourself.

    How do I fix the failsafe X mode and log in to my computer?

    Answer index: The greeter is invalid
