Search Results

Search found 16107 results on 645 pages for 'fulltext search'.


  • Is it possible to transfer Google Authorship?

    - by Stephanus Yanaputra
    Is it possible to transfer Google Authorship from one account to another? Scenario: I have user A, who is a legitimate Google Author on my WordPress site. When I search the keyword in Google, his name and photo show up in the search result. One day A left the company, so we don't want to use his name as an author anymore, and want to transfer authorship to another person, B. Technically speaking, I can just alter the display name and Google+ profile URL of A to B, and then I can probably notify Google of the change. But what will happen then? Will Google get confused and think that I'm running a scam? Is this action even correct in the first place?

    Read the article

  • Google won't display site

    - by Markasoftware
    My website (markasoftware.getenjoyment.net) doesn't seem to be indexed properly by Google (I haven't tried other search engines). When I type in the URL of my site, it appears right at the top of the list like it should. When I type in the entire contents of the title, however, the site doesn't appear! The title is quite long (Thermonuclear War Game Online: Thermonuclear War By Mark) and it has little (if any) competition. Have I been punished by Google for some reason, or is it something else? I have received zero hits from search engines. Can someone tell me why my site doesn't appear?

    Read the article

  • Why won't apt-get install anything after I deleted its cached lists?

    - by Gernot
    Recently I had a problem with the update-manager and searched for a solution. During my search I found a post where someone had the same problem, and as a solution he was told to run this command in the terminal: sudo rm /var/lib/apt/lists/* I ran it too, and the update-manager worked again. But now I've noticed that apt-get won't install anything. I wanted to install rvm (for Ruby) and therefore needed a few packages (build-essential and curl, to be precise). But whenever I try to install them, I get the message that there is no install candidate for the package. What can I do to get apt-get working again?
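
    Deleting the files under /var/lib/apt/lists/ removes apt's cached package indexes, so apt has no install candidates until the lists are rebuilt. A likely first step (standard apt usage, suggested here rather than taken from the post) is:

        sudo apt-get update
        sudo apt-get install build-essential curl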

    Read the article

  • How do I modify a GRUB entry to support a KGDB kernel image?

    - by Nishant
    I am trying to update the target machine's grub.cfg for a KGDB setup, but when booting, the machine hangs completely instead of waiting for a remote gdb connection. This is the entry I added:

        menuentry 'Ubuntu, with Linux 2.6.32-24-kgdb' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set 12878c3b-c553-4b4b-986a-6e32daea3ad1
            linux /vmlinuz-2.6.32-kgdb root=/dev/mapper/ubuntu-root ro kgdbwait [email protected]/,@192.168.140.158/ quiet
            initrd /initrd.img-2.6.32-24-server
        }

    I have also compiled /boot/vmlinuz-2.6.15.5-kgdb and /boot/System.map-2.6.15.5-kgdb on the development machine and copied them to the target machine. The standard entry in grub.cfg before adding KGDB was:

        menuentry 'Ubuntu, with Linux 2.6.32-24-server' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set 12878c3b-c553-4b4b-986a-6e32daea3ad1
            linux /vmlinuz-2.6.32-24-server root=/dev/mapper/ubuntu-root ro quiet
            initrd /initrd.img-2.6.32-24-server
        }

    Please suggest how to get rid of this problem.
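
    One mismatch stands out above: the kernel compiled for KGDB is 2.6.15.5-kgdb, yet the menuentry boots /vmlinuz-2.6.32-kgdb with the stock 2.6.32-24-server initrd. A sketch of an entry whose kernel, initrd, and kgdb arguments line up (the kgdboe=@local-ip/,@remote-ip/ syntax, the <local-ip> placeholder, and the initrd name are assumptions, not taken from the question):

        menuentry 'Ubuntu, with Linux 2.6.15.5-kgdb' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set 12878c3b-c553-4b4b-986a-6e32daea3ad1
            # kernel and initrd must name the image actually built with KGDB;
            # an initrd for the hand-built kernel has to exist (e.g. via mkinitramfs)
            linux /vmlinuz-2.6.15.5-kgdb root=/dev/mapper/ubuntu-root ro kgdbwait kgdboe=@<local-ip>/,@192.168.140.158/
            initrd /initrd.img-2.6.15.5-kgdb
        }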

    Read the article

  • Images not indexed by Google since moving to a CDN

    - by dfunkydog
    Last week I moved all the images on coffeeandvanilla.com to a CDN (maxcdn.coffeeandvanilla.com). The problem I'm having is that although the sitemap, generated by the Yoast WordPress SEO plugin, points images to the correct location, Google only indexes images from the category and page sitemaps, but 0 images from the posts sitemap (see screenshot https://dl.dropbox.com/u/4635252/sitemap.png ). The website had been doing quite well in Google image search before the change; visits from image search have dropped from ~200/day to 11 yesterday. Here is an example entry from the generated posts.xml sitemap: http://pastebin.com/vcMRf9VW Can anyone suggest where the problem lies? Why have I lost all my Google image juice? Should I just wait some more, and how long before really worrying?
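
    For comparison, a post entry in an image sitemap is expected to look roughly like this (URLs illustrative, not recovered from the pastebin); the urlset element must also declare the image namespace, and one commonly cited requirement at the time was verifying the CDN hostname in Google Webmaster Tools:

        <!-- urlset must declare xmlns:image="http://www.google.com/schemas/sitemap-image/1.1" -->
        <url>
          <loc>http://www.coffeeandvanilla.com/sample-post/</loc>
          <image:image>
            <image:loc>http://maxcdn.coffeeandvanilla.com/wp-content/uploads/sample.jpg</image:loc>
          </image:image>
        </url>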

    Read the article

  • Google locking up on Ubuntu

    - by user170534
    The problem I'm facing is that Google doesn't respond in a timely way to connection requests sent from any browser on Linux. As far as I can tell, the same problem existed in Mint, which is Ubuntu based. I have no debugging output or guess about the cause, but I'm sure there are people with the same problem. ping from the terminal is unaffected, but in any browser pages stay unloaded. For example: Google loads fine and I search for something; then I decide to search for something else, and ta-daa, you have to wait 30 seconds for the Google server to respond. I tried using Google's public DNS without success. Suggestions and ideas welcome!
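
    A quick way to separate DNS delay from connection delay (standard tools, suggested here rather than taken from the report):

        # compare resolver latency: system resolver vs. Google public DNS
        dig google.com | grep "Query time"
        dig @8.8.8.8 google.com | grep "Query time"
        # then time the HTTP round trip itself
        time curl -s -o /dev/null http://www.google.com/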

    Read the article

  • SEO tool is telling me title, description and keywords don't exist, but they do. Where is the problem?

    - by DaveDev
    I'm using the following tool to analyse how 'optimal' a site that I'm working on is for search engines: http://tools.seobook.com/general/spider-test/ I enter the URL for the site - http://ftmsuat.moneymate.com - into the search bar, and it returns a breakdown of the contents of the page. I'm a little confused by what I see though. According to the results, the page doesn't have a title, description or keywords. But if you check the source of the page, those elements are definitely there. So I'm wondering now, which is wrong? seobook.com or my page?
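
    One way to check what a crawler receives, as opposed to what a browser renders (a standard diagnostic; if the tags only appear in the browser, they are probably inserted by JavaScript, which the spider test won't execute):

        # fetch the page as a spider would and look for the head elements
        curl -s -A "Googlebot" http://ftmsuat.moneymate.com/ | grep -i -E "<title|<meta name=\"description\"|<meta name=\"keywords\""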

    Read the article

  • Are there any tweaks for fixing the appearance of Eclipse Juno on Ubuntu?

    - by agnul
    As we have previously established ;-) running Eclipse on Ubuntu is a bit disappointing on the UI side, and things are even worse now that Juno is out. Are there any tweaks specific to GTK3 and Juno that help make things better? The new UI maybe needs some getting used to, but I'm not convinced: padding got much worse, with all the extra (useless?) space between panes; the gradient on the toolbar looks ugly; the quick search looks like it needs more polish; the buttons to switch perspectives would maybe look nicer without the quick search bar; and tabs are way too big. I'm not sure whether the colour scheme has been fixed, since I'm running a modified theme for the sake of old 3.7 (the infamous white-on-black tooltips).

    Read the article

  • Can I include a robots meta tag outside of the head in HTML snippets intended to be SSIed?

    - by Dan
    I have a number of files in my site which are not intended for independent viewing, but rather to be AJAXed into content within the site. They obviously don't meet HTML standards (no body, head, etc.) as independent entities. I would like to prevent search engines from indexing these pages, but do not have access to /robots.txt (which would be much more ideal). My question is, could I include the following at the top of these partial HTML files and get the desired results? <meta name="robots" content="noindex, noarchive"> I guess there are two parts to this question. Will this cause any rendering issues in any browsers? Will search engines (at least Google & Bing) interpret this as intended?
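
    If the server configuration is reachable even though /robots.txt is not, an X-Robots-Tag response header achieves the same thing without putting markup in the fragments at all. A sketch for Apache with mod_headers enabled (the file pattern is hypothetical):

        # .htaccess: mark the AJAX fragments noindex via an HTTP header
        <FilesMatch "^fragment-.*\.html$">
            Header set X-Robots-Tag "noindex, noarchive"
        </FilesMatch>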

    Read the article

  • Kooboo CMS 2.1.1.0 released

    New features: Added a new API, RssUrl, to generate RSS links; this is an extension to UrlHelper. Added the ability to index and search attachment content in the Lucene full-text search engine; some attachment types require the IFilter component from Microsoft. Supported file attachments include: .docx, .docm, .pptx, .pptm, .xlsx, .xlsm, .xlsb, .zip, .one, .vdx, .vsd, .vss, .vst, .vsx, and .vtx. Please download and install IFilter from: http://www.microsoft.com/downloads/details.aspx?FamilyId=60C92A37-719C-4077-B5C6-CAC34F4227CC&displaylang=en

    Read the article

  • How to set the initial component focus

    - by frank.nimphius
    In ADF Faces, you use the af:document tag's initialFocusId attribute to define the initial component focus. For this, specify the id property value of the component that you want to put the initial focus on. Identifiers are relative to the component and must account for NamingContainers. You can use a single colon to start the search from the root, or multiple colons to move up through the NamingContainers: "::" pops out of the component's naming container and begins the search from there, ":::" pops out of two naming containers and begins the search from there. Alternatively, you can add the naming container ids as a prefix to the component id, e.g. nc1:nc2:comp1. http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_document.html To set the initial focus to a component located in a page fragment that is exposed through an ADF region, keep in mind that the ADF Faces region tag - af:region - is a naming container too. To address an input text field with the id "it1" in an ADF region exposed by an af:region tag with the id r1, you use the following reference in af:document: <af:document id="d1" initialFocusId="r1:0:it1"> Note the "0" index in the client id. Also, make sure the input text component has its clientComponent property set to true, as otherwise no client component exists to put the focus on.
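
    Putting the pieces together, a minimal sketch (ids and the binding expression are illustrative, not from the original post):

        <af:document id="d1" initialFocusId="r1:0:it1">
          <af:form id="f1">
            <!-- the region's page fragment contains the input text below -->
            <af:region value="#{bindings.region1.regionModel}" id="r1"/>
          </af:form>
        </af:document>

        <!-- inside the page fragment exposed through the region -->
        <af:inputText id="it1" label="Name" clientComponent="true"/>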

    Read the article

  • robots.txt, how effective is it and how long does it take?

    - by Stefan
    We recently updated the site to a single-page site using jQuery to slide between "pages", so we now have only index.php. When you search for the company on engines such as Google, you get the site and a listing of its sub-pages, which now lead to outdated pages. Our plan doesn't allow us to edit the .htaccess, and the old pages are .html docs, so I cannot use PHP redirects either. So if I put in place a robots.txt telling the engines not to crawl beyond index.php, how effective will this be in preventing/removing the crawled sub-pages? And, as a rough guess, how long before the search engines would update?
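
    A sketch of such a robots.txt (filenames hypothetical); note that robots.txt prevents future crawling but does not by itself remove already-indexed pages, so removal typically waits on a recrawl or on a URL removal request in Webmaster Tools:

        User-agent: *
        Disallow: /about.html
        Disallow: /contact.html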

    Read the article

  • Is there any advantage/disadvantage to using robots.txt to disallow access to legal pages such as terms, privacy policy, etc.?

    - by CaptainCodeman
    As I understand it, having repetitive content is a detriment to search engine placement. Given that many websites use similar or even identical "Terms and Conditions" and "Privacy Policy" pages, due to similar legal wording or to copy & pasting from the same source, would it be a good idea to disallow access to these pages via robots.txt, in order to avoid being penalized for "non-original content"? Or, on the contrary, could the search engines identify this as circumvention and penalize the site for trying to hide content? Or does it not matter?

    Read the article

  • Does SEO optimisation count on the responsive side of a site?

    - by Rick Donohoe
    I'm looking at making some SEO fixes, and at this point I'm sorting out the heading structure and keywords: H1s, H2s, etc. We have a site with a number of similar blocks, where one is always visible and one is hidden depending on the screen size; this is our method of making a single site responsive. Firstly, how does this technique affect SEO, and in general does the responsive side of a site matter at all to search engines? What I mean is: if the site has different content depending on screen size, which content would the search spider crawl?

    Read the article

  • Flowchart for solving programming problems

    - by nurne
    I noticed that every developer implements a somewhat different flowchart for solving programming problems. By flowchart I mean a defined system of techniques that the developer goes through in a certain sequence, trying to solve the problem at hand. Some examples for techniques: Google "how to..." or "... tutorial". Search the java/msdn/apple/etc API doc for the specific class or method. Search in stack overflow the exact problem with some tags like [iphone]/[java] etc. Take a nap and let the subconscious work. Debug. Draw the algorithm or system. Google the logged error message. Ask a colleague or manager. Ask a new question in stack overflow. From your experience, what is the best flowchart for solving a programming problem?

    Read the article

  • I've changed my URL schema. How do I tell Google to index the new schema and forget the old one?

    - by growse
    I had a site where the URLs were constructed like this: /index.php/Topic /index.php/AnotherTopic These were indexed in Google, and search results were returned that pointed to them. However, I've recently replatformed the site and reconfigured it so the above URLs become: /index.php?title=Topic /index.php?title=AnotherTopic The original URLs now return 404s. The site links to the correct URL schema internally, but Google is retaining the original schema in its search results. I've updated and resubmitted the sitemap, which only contains the new schema. Also, Google Webmaster Tools is going slightly bananas at the spike in 404 errors in its crawl results. What would be the best approach to get Google to 'forget' the old schema and index the new one instead? Should I try blocking /index.php/ in robots.txt? Should I be returning 301 codes instead of 404 for the original URLs?
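
    A 301 is the conventional answer here: a permanent redirect transfers the old URL's standing and tells Google to replace the old schema with the new one, which a 404 or a robots.txt block does far more slowly, if at all. A sketch for Apache mod_rewrite in .htaccess (assumes that module is available):

        RewriteEngine On
        # /index.php/Topic  ->  /index.php?title=Topic (permanent redirect)
        RewriteRule ^index\.php/(.+)$ /index.php?title=$1 [R=301,L]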

    Read the article

  • ERROR CHECKING !!

    - by moata_u
    I am trying to catch any error when running a command, in order to write a log file / report. I was trying to write this code:

        # function for validation
        function valid () {
            if [ $? -eq 0 ]; then
                echo "$var1" ": status : OK"
            else
                echo "$var1" ": status : ERROR"
            fi
        }

        # command function
        function save () {
            sed -i "/:@/c connection.url=jdbc:oracle:thin:@$ip:1521:$dataBase" $search
            var1="adding database ip"
            valid $var1
            sed -i "/connection.username/c connection.username=$name" #$search
            retval=$?
            var1="adding database SID"
            valid $var1 $retval
        }
        save

    Output:

        adding database ip : status : OK
        sed: no input file

    I want the output this way:

        adding database ip : status : OK
        sed: no input file : status : ERROR

    (or)

        adding database ip : status : OK
        adding database SID : status : ERROR

    I have tried a lot, but it is not working for me. :(((
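
    A minimal sketch of one way to get the desired report: read $? immediately after each command and pass it to the logger as an argument. (Same commands and variables as in the question; note that in the second sed the file argument is commented out by the #, which is why sed itself prints "no input file".)

        # log helper: takes a label and an exit status, prints an OK/ERROR line
        valid () {
            local label=$1 status=$2
            if [ "$status" -eq 0 ]; then
                echo "$label : status : OK"
            else
                echo "$label : status : ERROR"
            fi
        }

        save () {
            sed -i "/:@/c connection.url=jdbc:oracle:thin:@$ip:1521:$dataBase" "$search"
            valid "adding database ip" $?    # $? must be read right after the command

            sed -i "/connection.username/c connection.username=$name" "$search"
            valid "adding database SID" $?
        }

        save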

    Read the article

  • Google Maps keeps displaying in Spanish

    - by Ken Hortsch
    Originally posted on: http://geekswithblogs.net/BlueProbe/archive/2013/11/12/154610.aspx In Chrome I use Google Maps as a search provider. That way I can just type "maps" in the URL address bar, hit a couple of tab keys, and enter the Maps address, and have the page rendered with my map. Now, periodically, maps were displaying in Spanish with a "click here to translate to English" option. Huh? My language settings in the browser and within Google settings were all English. It turns out I had set my Chrome search provider string to include a Spanish language query parameter. Why would I do that? Evil twin, perhaps.
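
    For reference, a search-provider URL of roughly this shape keeps Maps in English (hl is Google's interface-language parameter; the exact string is an assumption, not recovered from the post):

        https://maps.google.com/maps?hl=en&q=%s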

    Read the article

  • How to remove the "Recent items" bookmark from the GNOME Shell standard "save window"

    - by Kiwy
    I've searched all around the internet, down to the sixth page of Google results with very precise searches, but I can't figure out how to do this. I'm working with fully updated Ubuntu 12.04 and GNOME Shell, and I wonder how I can REMOVE (and I say remove, not clear or avoid feeding, but remove completely) the "Recent items" bookmark you can see in the standard save window of GNOME Shell (in French it's "Récemment utilisés"). Sorry, I don't have enough points to post an image; to see what I'm talking about, just do this: open gedit, type anything, save your file. That window has a "Recent items" bookmark, and I want it gone. I cannot award points, but I would if I could, for whoever finds a solution, with a bonus if you find a way to remove it everywhere it appears in GNOME Shell. Thank you for your time. Antoine
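
    I'm not aware of a supported switch for this in the GTK of that era, but one commonly suggested workaround is to empty GTK's recent-files store and make it immutable so nothing is ever recorded; note this starves the list rather than removing the bookmark itself, which the asker explicitly wants (paths assume a standard GTK 3 setup):

        rm -f ~/.local/share/recently-used.xbel
        touch ~/.local/share/recently-used.xbel
        sudo chattr +i ~/.local/share/recently-used.xbel   # immutable: GTK can no longer record history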

    Read the article

  • Asynchronously returning hierarchical data using the .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, and I want to search for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced, and I want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found ("better" meaning closer to the root of C:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report). Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | not started | complete), and I might base it on the System.Collections.Concurrent namespace. Another idea that I'm considering is the consumer/producer pattern (which the concurrent collections can handle), however I'm not sure what the objects "look" like. Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. If someone were to literally do a DIR /S as described above, then they would need to account for more than one matching file per subdirectory. More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion, but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node; it is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy, I'm interested in getting the results FAST (first node found), but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want a report of ALL the nodes that contain this RowKey.

    Read the article

  • Ubuntu 12.10 webapps don't open correctly when using Chromium

    - by Alex
    I have just installed Ubuntu 12.10 completely fresh; the old version of Ubuntu was discarded or overwritten (or whatever you call it). I want to use the Ubuntu web apps with Chromium, but I've had several problems. The first problem is that Chromium won't ask me if I want to install a web app when I go to a supported site (and don't already have the web app installed). The second problem is that when I install the web app by visiting the site in Firefox and then try to open it in Chromium, Ubuntu opens a completely new Chromium icon and window in the Launcher, and the icon is labeled "Untitled"; also, there is no search bar in the new window, only the tab at the top. I've tried using several web apps with Firefox set as the default browser and they work as expected: once the web app icon is clicked, a Firefox window is opened on the Firefox launcher icon, and the window has a 'new tab' button and search bar.

    Read the article

  • Subtext 2.5 is released!

    After more than a year since the last release, we are happy to announce that the new version of Subtext, 2.5, has just been released. The main features are the new dashboard, featuring Ayende's formula for blog post popularity, and an improved site-wide search based on Lucene.net. More about that will be published shortly; in the meantime you can read how the search engine has been implemented using Lucene.net. Lots of improvements have been made to the codebase of Subtext for this release.

    Read the article

  • What does path finding in internet routing do and how is it different from A*?

    - by alan2here
    Note: If you don't understand this question then feel free to ask for clarification in the comments instead of voting down; it might be that this question needs some more work at the moment. I've been directed here from the Stack Exchange chat room Root Access because my question didn't fit on Super User. In many respects, path finding algorithms like A* are very similar to internet routing. For example: a node in an A* path finding system can search for a path through edges between other nodes; a router that's part of the internet can search for a route through cables between other routers. In the case of A*, open and closed lists are kept by the system as a whole, separately from any individual node, as well as each node being able to temporarily store a state involving several numbers. Routers on the internet seem to have remarkable properties, as I understand it: They are very performant. New nodes can be added at any time that use a free address from a finite (not tree-like) address space. It's real routing, like A*; there's never any doubling back, for example. Similar IP addresses don't have to be geographically nearby. The network reacts quickly to changes in the network's shape, for example if a line is down. Routers share information, and it takes time for new IPs to be registered everywhere, but presumably every router doesn't have to store a list of all the addresses each of its directions leads most directly to. I'm looking for a basic, general, high-level description of the algorithm's workings from the point of view of an individual router. Does anyone have one? I presume public internet routers don't use A*, as the overheads would be too large and it would scale too poorly. I also presume there is a single method worldwide, because it seems as if it must involve a lot of transferring data to update and communicate a reasonable amount of state between neighboring routers. For example, perhaps the amount of data that needs to be stored in each router scales logarithmically with the number of routers that exist worldwide, the detail and reliability of the routing is reduced over increasing distances, there is increasing backtracking involved in parts of the network that are less geographically uniform, or maybe each router really does perform an A*-style search, temporarily maintaining open and closed lists when a packet arrives.

    Read the article

  • How to shrink Windows partition with unmovable files in dual boot installation

    - by Tim
    To install Ubuntu alongside Windows 7, I have to shrink the Windows 7 partition C:. But due to some unmovable files, I cannot shrink it as much as I planned using Windows' own shrinking tool. I guess many of you who have both OSes on the same hard drive have had a similar experience. How do I solve this problem? Any reference that can help is also appreciated! Thanks and regards! UPDATE: I have identified the unmovable file currently stopping further shrinking: \ProgramData\Microsoft\Search\Data\Applications\Windows\Projects\SystemIndex\Indexer\CiFiles\00010015.wid::$DATA If I understand correctly, the file belongs to Windows Search. Can I set something up in the Windows system settings to temporarily eliminate this file and similar ones? (There are many similar files under the same directory which I guess will also stand in the way of shrinking, and which defrag cannot move.)
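
    That .wid file belongs to the Windows Search indexer. One approach (service and command names are the standard ones, but treat this as a sketch) is to stop and temporarily disable the Windows Search service from an elevated command prompt, shrink, then re-enable it:

        net stop WSearch
        sc config WSearch start= disabled
        :: shrink the partition now, then restore the service:
        sc config WSearch start= delayed-auto
        net start WSearch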

    Read the article
