Search Results

Search found 50150 results on 2006 pages for 'page search'.

  • How can I search an XML file without a dynamic language?

    - by jeph perro
    Let me try to explain my situation: we are using a CMS which 'bakes' a website, and you publish it to a webserver. The published site contains only static HTML (or XML) pages, generated from the content in the CMS database. I imported an XML file with the names and phone numbers from the company phone directory. Using only XSLT, can I create a way to search that directory? For example, if my XML file, directory.xml, looks like this:

        <directory>
          <person>
            <fname>Ryan</fname>
            <lname>Purple</lname>
            <phone>887 778 5544</phone>
          </person>
          <person>
            <fname>Tanya</fname>
            <lname>Orange</lname>
            <phone>887 998 5541</phone>
          </person>
        </directory>

    can I create a way to search for a person whose last name starts with "Pur"? Can I pass a parameter to the XSLT? Can I search the XML tree to match the string in the parameter?
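
    A minimal browser-side sketch of one way this could work on a fully static site, assuming the published pages can include a small script; the file names (directory.xml, directory.xsl), the "prefix" parameter, and the "results" element are illustrative, and the stylesheet is assumed to declare <xsl:param name="prefix"/> and select person[starts-with(lname, $prefix)]:

        // Client-side search of a static directory.xml via XSLT.
        // XSLTProcessor.setParameter passes the search string into the stylesheet.
        async function searchDirectory(prefix: string): Promise<void> {
          const [xmlText, xslText] = await Promise.all([
            fetch("directory.xml").then(r => r.text()),
            fetch("directory.xsl").then(r => r.text()),
          ]);
          const parser = new DOMParser();
          const xml = parser.parseFromString(xmlText, "application/xml");
          const xsl = parser.parseFromString(xslText, "application/xml");

          const processor = new XSLTProcessor();
          processor.importStylesheet(xsl);
          processor.setParameter(null, "prefix", prefix); // e.g. "Pur"

          const fragment = processor.transformToFragment(xml, document);
          document.getElementById("results")!.replaceChildren(fragment);
        }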

    Read the article

  • How to structure a multilingual website for search engines?

    - by Nirmal
    I have this website which decides on the display language by a GET parameter: http://www.mysite.com/index.php?page=home&locale=en, which is rewritten as http://www.mysite.com/en/home. When no language is specified, the system defaults to English (en). Now how do I tell the search engines that several language versions of the website exist? When a search bot enters the site, it will trigger the default English language and, after finishing, will just leave the site without considering the other languages. I can very well have a sitemap with links to the default pages of each language, so the bot can navigate from there. But how do I tell the bot that the entry in the sitemap is the home page for that language? For example, if someone searches for 'mi sitio', they should be presented with the result http://www.mysite.com/es/home and not some other internal page. Any light on this? Thanks.
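
    One common approach (not from the original post) is to declare each URL's language alternates with hreflang annotations, either as <link rel="alternate" hreflang="..."> tags or directly in the XML sitemap. A minimal sketch that emits such sitemap entries; the language list and URLs are illustrative:

        // Emit one <url> sitemap entry per language version, each listing
        // every language as an xhtml:link alternate (hreflang annotations).
        const languages = ["en", "es", "fr"]; // illustrative
        const base = "http://www.mysite.com";

        function sitemapEntry(page: string, lang: string): string {
          const alternates = languages
            .map(l => `    <xhtml:link rel="alternate" hreflang="${l}" href="${base}/${l}/${page}"/>`)
            .join("\n");
          return `  <url>\n    <loc>${base}/${lang}/${page}</loc>\n${alternates}\n  </url>`;
        }

        console.log(languages.map(l => sitemapEntry("home", l)).join("\n"));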

    Read the article

  • Is there a command-line utility app which can locate a specific block of lines in a text file?

    - by fred.bear
    The text "search and replace" utility programs I've seen, seem to only search on a line-by-line basis... Is there a command-line tool which can locate one block of lines (in a text file), and replace it with another block of lines.? For example: Does the test file file contain this exact group of lines: 'Twas brillig, and the slithy toves Did gyre and gimble in the wabe: All mimsy were the borogoves, And the mome raths outgrabe. 'Beware the Jabberwock, my son! The jaws that bite, the claws that catch! Beware the Jubjub bird, and shun The frumious Bandersnatch!' I want this, so that I can replace multiple lines of text in a file and know I'm not overwriting the wrong lines. I would never replace "The Jabberwocky" (Lewis Carroll), but it makes a novel example :)

    Read the article

  • How long does it take for Google Webmasters to index site after submitting sitemap? [closed]

    - by Venkatesh Hodavdekar
    Possible Duplicate: Why isn't my website in Google search results? I submitted my website to Google search today using Google Webmasters, via a sitemap. The status on the sitemap says OK, and it shows that 12 URLs have been recognized. I was wondering how long it takes for the links to get indexed, as the indexed URL option says "No data available. Please check back soon." I am not sure if it is showing this message due to some error, or if everything is fine.

    Read the article

  • Directing crawlers to content in language per language sub-domain

    - by Noam
    I have a multilingual website with many pages (40M). The site has UGC, and each translation is actually of the titles. Each sub-domain points to the same content with different titles per language. As far as I understand, each sub-domain should be indexed by search engines, meaning they will actually need to crawl 40M x supported-languages pages. So I thought it might be best to direct each sub-domain's crawler to pages that are fully in that language (titles + UGC). Is there a way to do this? Should search engines understand this on their own?

    Read the article

  • Can I make query strings produce separate pages?

    - by John Smith
    I have a profile page with a URL like so: localhost/profile.php/?username=Bob. I was wondering, if I had a separate <title> which changed according to the username, would they produce separate pages in the Google search results? How do I tell Google to only use the username string, or does it search within the title? On a similar note, how would I create a separate page per username, like localhost/bob, instead of a query string, the way Facebook does? Do I make a new file for each user?
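
    A short sketch (assuming a Node.js/Express app; the route and in-memory data are illustrative) of the usual answer to the last question: one dynamic route serves every profile, so no per-user file is needed:

        // One route handles /bob, /alice, ...; the path segment is a lookup key.
        import express from "express";

        const users = new Map([["bob", { name: "Bob" }]]); // illustrative "database"
        const app = express();

        app.get("/:username", (req, res) => {
          const user = users.get(req.params.username.toLowerCase());
          if (!user) {
            res.status(404).send("No such user");
            return;
          }
          // The <title> varies per user, so each URL is a distinct page
          // from a search engine's point of view.
          res.send(`<html><head><title>${user.name}'s profile</title></head>` +
                   `<body>Profile of ${user.name}</body></html>`);
        });

        app.listen(3000);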

    Read the article

  • SQLSaturday #60 - Cleveland Rocks!

    - by Mike C
    Looking forward to seeing all the DBAs, programmers and BI folks in Cleveland at SQLSaturday #60 tomorrow! I'll be presenting on (1) Intro to Spatial Data and (2) Build Your Own Search Engine in SQL. I've reworked the Spatial Data presentation based on feedback from previous SQLSaturday events and added more sample code. I also expanded the Build Your Own Search Engine code samples to demonstrate additional FILESTREAM functionality. See you all tomorrow! A little road music, please! http://www.youtube.com/watch?v=vU0JpyH1gC...(read more)

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding the implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment, let's assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand, not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an all-or-nothing smart search (i.e. instead of MATCH ALL or MATCH ANY, I need to simply allow them to select; I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
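
    A sketch (under the assumption that arbitrary AND/OR nesting is wanted) of modelling the criteria as a small tree of groups and conditions; the whole tree serializes to JSON for storage in a single text column, and the type names are illustrative:

        // Criteria as a tree: groups carry AND/OR and can nest, so "between
        // OR something else" is just a group inside a group.
        type Verb = "contains" | "does_not_contain" | "is_equal_to"
                  | "greater_than" | "less_than";

        interface Condition {
          kind: "condition";
          subject: string; // e.g. "date_received"
          verb: Verb;
          object: string;  // the user-supplied value
        }

        interface Group {
          kind: "group";
          op: "and" | "or";
          children: Criteria[];
        }

        type Criteria = Condition | Group;

        // "date between X and Y" OR "from address contains Z":
        const example: Criteria = {
          kind: "group",
          op: "or",
          children: [
            { kind: "group", op: "and", children: [
              { kind: "condition", subject: "date_received", verb: "greater_than", object: "2010-01-01" },
              { kind: "condition", subject: "date_received", verb: "less_than", object: "2010-02-01" },
            ]},
            { kind: "condition", subject: "from_address", verb: "contains", object: "@example.com" },
          ],
        };

        const stored = JSON.stringify(example); // persist; JSON.parse() to rebuild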

    Read the article

  • Get data from an ASP page

    - by sam
    I am wondering if there is any way to grab the HTML that is generated from an ASP page. I am trying to pull a table from the page; I foolishly tested against a static HTML copy so I would not have to be constantly querying the server where the page resides while I tested my code. The JavaScript code I wrote to grab the unlabeled table from the page works. But when I put it into practice with the real page, I found that the ASP page does not return a usable page for a jQuery .get request on the URL. Is there any way to query the page for the table I need so that the ASP page returns a valid page on request? (I am also limited to using JavaScript and Perl for this; the server where this will reside will not run PHP, and I have no desire to learn ASP.NET to solve this by adding to the issue of proprietary software.)

    Read the article

  • PG::Error: ERROR: operator does not exist: integer ~~ unknown

    - by rsvmrk
    I'm making a search function in a Rails project with Postgres as the db. Here's my code:

        def self.search(search)
          if search
            find(:all, :conditions => ["LOWER(name) LIKE LOWER(?) OR LOWER(city) LIKE LOWER(?) OR LOWER(address) LIKE LOWER(?) OR (venue_type) LIKE (?)",
              "%#{search}%", "%#{search}%", "%#{search}%", "%#{search}%"])
          else
            find(:all)
          end
        end

    But my problem is that venue_type is an integer. I've made a case switch for venue_type:

        def venue_type_check
          case self.venue_type
          when 1
            "Pub"
          when 2
            "Nattklubb"
          end
        end

    Now to my question: how can I find something in my query when venue_type is an int?
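
    The error means Postgres's LIKE operator (~~) is not defined for integer columns. One fix is to cast the column to text; another, sketched below with the Node pg driver (the table name and label map are illustrative and mirror venue_type_check), is to translate the search term to its integer id and compare with equality:

        // Map the human-readable venue labels back to their integer ids,
        // and compare venue_type with = instead of LIKE.
        import { Client } from "pg";

        const labels: Record<string, number> = { pub: 1, nattklubb: 2 };

        async function search(term: string): Promise<unknown[]> {
          const client = new Client(); // connection details from environment
          await client.connect();
          const typeId = labels[term.toLowerCase()] ?? -1; // -1 matches no row
          const { rows } = await client.query(
            `SELECT * FROM venues
              WHERE LOWER(name) LIKE LOWER($1)
                 OR LOWER(city) LIKE LOWER($1)
                 OR LOWER(address) LIKE LOWER($1)
                 OR venue_type = $2`,
            [`%${term}%`, typeId],
          );
          await client.end();
          return rows;
        }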

    Read the article

  • Is there a word or description for this type of query?

    - by Nick
    We have the requirement to find a result in a collection of records based on a prioritised set of search criteria against a relational db (I'm talking indexed field matching here rather than text search). The way we are thinking about designing the query is to begin with a highly refined and specific set of criteria. If there are no results for this initial query, we want to progressively reduce the criteria one by one, in order of decreasing priority, querying each time with a less specific set of criteria until we find a result we can accept. Alternatively, we have considered starting with a smaller set of criteria and increasing it until we have reduced the number of results down to the last set. What I would like to know is whether an existing term describes this type of query, so that we can model our own on existing patterns and use best practice.
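
    This is often described as query relaxation. A small sketch of the first strategy, under the assumption that criteria carry an explicit priority; the Criterion shape and the injected run function are illustrative:

        // Try the most specific criteria first, then drop the lowest-priority
        // criterion and retry until something matches.
        interface Criterion { column: string; value: unknown; priority: number }

        async function relaxedSearch(
          criteria: Criterion[],
          run: (active: Criterion[]) => Promise<unknown[]>, // executes one query
        ): Promise<unknown[]> {
          // Sort highest priority first, so the least important is dropped first.
          let active = [...criteria].sort((a, b) => b.priority - a.priority);
          while (active.length > 0) {
            const rows = await run(active);
            if (rows.length > 0) return rows;
            active = active.slice(0, -1); // relax: drop the least important criterion
          }
          return [];
        }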

    Read the article

  • How can I most accurately calculate the execution time of an ASP.NET page while also displaying it on the page?

    - by henningst
    I want to calculate the execution time of my ASP.NET pages and display it on the page. Currently I'm calculating the execution time using a System.Diagnostics.Stopwatch and then storing the value in a log database. The stopwatch is started in OnInit and stopped in OnPreRenderComplete. This seems to be working quite well, and it gives an execution time similar to the one shown in the page trace. The problem now is that I'm not able to display the execution time on the page, because the stopwatch is stopped too late in the life cycle. What is the best way to do this?

    Read the article

  • Address Book Authentication

    - by Gus E
    I just upgraded to Ubuntu 14.04.1 and run GNOME Shell. I am consistently getting a pop-up window prompting me for my Gmail address book authentication. The window pops up the moment I type something into GNOME Shell after hitting the Super key. I'm assuming that Ubuntu wants to search my address book for people to include in the search results. I have opened the settings, deleted my account from the Online Accounts section, and rebooted; nothing seems to stop the popup. Where is it getting my email address from? Most importantly, how do I stop this super annoying popup from appearing?

    Read the article

  • MySQL searching using many 'like' operators: is there a better way?

    - by DrAgonmoray
    I have a page that gets all rows from a table in a database, then displays the rows in an HTML table. That works great, but now I want to implement a 'search' feature. There is a search box, and search terms are separated by a space. I am going to make it search three fields for the search terms: 'make', 'model' and 'type'. These three fields are VARCHAR(30). Currently, if I wanted to search using three terms (say 'cool', 'abc' and '123'), my query would look something like this:

        SELECT * FROM table
        WHERE make LIKE '%cool%' OR make LIKE '%abc%' OR make LIKE '%123%'
           OR model LIKE '%cool%' OR model LIKE '%abc%' OR model LIKE '%123%'
           OR type LIKE '%cool%' OR type LIKE '%abc%' OR type LIKE '%123%'

    That looks really bad, and it will get even worse if there are more search terms or more fields to search. My question to you: is there a better way to search? If so, what?
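
    One common alternative (not from the original post) is a full-text index: with a FULLTEXT index on (make, model, type), a single MATCH ... AGAINST clause covers every term and every field. A sketch assuming the Node mysql2 driver; the table, columns, and connection details are illustrative:

        // One MATCH ... AGAINST replaces the grid of LIKE clauses, and the
        // query no longer grows with (number of terms) x (number of fields).
        import mysql from "mysql2/promise";

        async function search(terms: string[]): Promise<unknown[]> {
          const conn = await mysql.createConnection({ database: "mydb" }); // illustrative
          const [rows] = await conn.execute(
            `SELECT * FROM products
              WHERE MATCH (make, model, type) AGAINST (? IN NATURAL LANGUAGE MODE)`,
            [terms.join(" ")],
          );
          await conn.end();
          return rows as unknown[];
        }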

    Read the article

  • will_paginate plugin on two objects on the same page

    - by piemesons
    Hello. I am using the will_paginate plugin on two objects on the same page, like on Stack Overflow: a profile page with pagination on two things, questions and answers. The problem I am having is that when the user clicks page 2 of the questions pagination, the answers pages also update, because both paginators send the same parameter, params[:page]. How do I change this parameter so that only one list is updated? And how do I make sure the user does not lose the other page? I.e. if he is on the 3rd page of questions and the 1st page of answers and now clicks the 5th page of answers, the result should be the 3rd page of questions and the 5th page of answers.
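
    will_paginate lets each list use its own parameter (its :param_name option), and the links for one list must then preserve the other list's current parameter. A small sketch of that link-building idea using URLSearchParams; the parameter names are illustrative:

        // Each paginator gets its own query parameter; setting one leaves
        // the other untouched, so the two lists page independently.
        function pageLink(current: URL, which: "questions_page" | "answers_page",
                          page: number): string {
          const url = new URL(current.toString());
          url.searchParams.set(which, String(page));
          return url.toString();
        }

        // From questions page 3 / answers page 1, clicking answers page 5:
        const href = pageLink(
          new URL("http://example.com/profile?questions_page=3&answers_page=1"),
          "answers_page", 5,
        ); // -> http://example.com/profile?questions_page=3&answers_page=5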

    Read the article

  • Writing a spell checker similar to "did you mean"

    - by user888734
    I'm hoping to write a spell checker for search queries in a web application, not unlike Google's "Did you mean?". The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13. In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams, the Google Web 1T, which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents: about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance: can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?
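
    Some rough back-of-the-envelope arithmetic (every constant below is an assumption) suggests why an in-process hashtable gets tight at this scale: hundreds of millions of short string keys plus per-entry overhead easily reach tens of gigabytes, which points toward a disk-backed key-value store, or a sorted file with binary search, rather than an in-heap table:

        // Rough sizing for an in-memory n-gram -> count table.
        const ngramCount = 200e6;     // assumed yield from ~200k documents
        const avgKeyChars = 25;       // assumed average n-gram length
        const perEntryOverhead = 80;  // rough map + string object overhead, bytes
        const counterBytes = 8;
        const bytes = ngramCount * (avgKeyChars * 2 + perEntryOverhead + counterBytes);
        console.log(`~${(bytes / 1e9).toFixed(0)} GB`); // ~28 GB under these assumptions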

    Read the article

  • What factors help in getting a site indexed by Google fast?

    - by ekaj
    What should I do to get a site indexed by Google fast? Taking Super User as an example: I just did a quick Google search for a question that was 20 minutes old, to look for an answer, and it was already in Google's search results. How is this possible? I glanced over this article, which seems to suggest that SU has added RSS feeds (which SU has, but when I opened the feed, the latest item said it was posted 6 minutes ago, while the question I Googled was 11 hours old), which leads me to think (based on that article; I don't know much about search indexing, but I am reading about it at the moment) that most of this indexing is done thanks to the sitemap. Is there anything else I am unaware of that helps SU questions get onto Google so fast?

    Read the article

  • How can I make my Google Maps API v3 address search bar work by hitting the Enter key on the keyboard?

    - by Gavin
    I'm developing a webpage and I would just like to make something more user-friendly. I have a functional Google Maps API v3 map and an address search bar. Currently, I have to use the mouse to click Search to trigger the geocoding function. How can I make the map return a placemark by hitting the Enter key on my keyboard? I just want to make it as user-friendly as possible. Here are the JavaScript and the div, respectively, that I created for the address bar:

        var geocoder;

        function initialize() {
          geocoder = new google.maps.Geocoder();
        }

        function codeAddress() {
          var address = document.getElementById("address").value;
          geocoder.geocode({ 'address': address }, function(results, status) {
            if (status == google.maps.GeocoderStatus.OK) {
              map.setCenter(results[0].geometry.location);
              marker.setPosition(results[0].geometry.location);
              map.setZoom(14);
            } else {
              alert("Geocode was not successful for the following reason: " + status);
            }
          });
        }

        <div id="geocoder">
          <input id="address" type="textbox" value="">
          <input type="button" value="Search" onclick="codeAddress()">
        </div>

    Thank you in advance for your help.
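
    A minimal sketch of the usual fix: listen for the Enter key on the address box and call the same handler the Search button uses (this assumes the #address input and the codeAddress() function above):

        // Trigger the geocode lookup when Enter is pressed in the address box.
        declare function codeAddress(): void; // provided by the page's script above

        const addressBox = document.getElementById("address") as HTMLInputElement;
        addressBox.addEventListener("keydown", (event: KeyboardEvent) => {
          if (event.key === "Enter") {
            event.preventDefault(); // don't submit a surrounding form
            codeAddress();          // same handler as the Search button
          }
        });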

    Read the article

  • How to implement a search page which shows results on the same page?

    - by Andrew
    I'm using ASP.NET MVC 2 for the first time on a project at work and am feeling like a bit of a noob. I have a page with a customer search control/partial view. The control is a textbox and a button. You enter a customer id into the textbox and hit Search. The page then "refreshes" and shows the customer details on the same page. In other words, the customer details appear below the customer search control. This is so that if the customer isn't the right one, the user can search again without hitting Back in the browser. Or, perhaps they mistyped the customer id and need to try again. I want the URL to look like this: /Customer/Search/1. Obviously, this follows the default route in the project. Now, if I type the URL above directly into my browser, it works fine. However, when I then use the search control on that page to search for, say, customer 2, the page refreshes with the correct customer details but the URL does not change! It stays /Customer/Search/1 when I want it to be /Customer/Search/2. How can I get it to change to the correct URL? I am only using the default route in Global.asax. My Search method looks like this:

        <AcceptVerbs(HttpVerbs.Get)> _
        Function Search(ByVal id As String) As ActionResult
            Dim customer As Customer = New CustomerRepository().GetById(id)
            Return View("SearchResult", customer)
        End Function
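
    A likely cause (an assumption, but consistent with the symptoms): a plain GET form submits to /Customer/Search?id=2 rather than the route-style /Customer/Search/2, so the address bar never shows the new id. One client-side sketch that rewrites the submit into the route-style URL; the element ids are illustrative, and a server-side redirect to the GET action achieves the same thing:

        // Turn the form submit into navigation to /Customer/Search/{id}.
        const form = document.getElementById("customer-search") as HTMLFormElement;
        const idBox = document.getElementById("customer-id") as HTMLInputElement;

        form.addEventListener("submit", (event: Event) => {
          event.preventDefault();
          const id = encodeURIComponent(idBox.value.trim());
          if (id) window.location.assign(`/Customer/Search/${id}`);
        });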

    Read the article

  • ASP.NET: could a half-submitted web page be processed?

    - by c00ke
    I'm having a weird bug in production and am just wondering: is it possible for a half-submitted web page to be processed by the server? The page has no view state; it just uses plain old HTML controls, and we access the data displayed in a repeater on the back end via Request.Form[name], etc. Is it possible for a request to be truncated, perhaps due to a lost internet connection, and the page still be processed by the server? In that case, if a field were not part of the request, Request.Form[name] could result in null? I know I can use Fiddler to modify a request, but unfortunately we are not allowed to change group policy and change the proxy! Many thanks.

    Read the article

  • Handling SEO for Infinite pages that cause external slow API calls

    - by Noam
    I have an 'infinite' number of pages in my site which rely on an external API. Generating each page takes time (1 minute). Links in the site point to such pages, and when a user clicks them they are generated while he waits. Considering I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options:

    1. Create really simple pages for the web spiders, so that only real users fetch the data and generate the full page. I'm a little 'afraid' Google will see this as low-quality content, which might also feel duplicated.

    2. Put them under a directory in my site (e.g. /non-generated/) and disallow it in robots.txt. The problem here is that I don't want users to have to deal with a different URL when wanting to share the page or make sense of it. I've thought about maybe redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them; again, I'm not sure Google will like me for that.

    3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site will also seem slower than it should from a spider's perspective (if it only crawled the generated pages, it would think the site is much faster).

    Which approach would you suggest?

    Read the article

  • Multilingual website without language component in the URL

    - by user359650
    I'm working on a website for Canada which will have French and English versions. For SEO purposes, I would like to avoid using any language tag in URLs, because I believe it will have more impact (e.g. example.ca/products is better than en.example.ca/products or example.ca/en/products). I believe this is technically possible because the 2 languages are sufficiently different that the URLs won't conflict with one another (e.g. if you want a "product" page, it will be /products in English and /produits in French, so you know which language the URL is about). Since Google (and most likely others) doesn't rely on the URL (nor HTML tags) to determine the content language, I don't see any problems with search engines. To make this possible I've thought about using a cookie distinct from the session cookie (e.g. example.org_language) with a long-term expiry (e.g. N years) that will memorize the language chosen by the user. That way, when people visit the website with a new browser session, they get served the proper language. I have already given up on users being able to switch a page from English to French: when people choose English or French from the menu, they will be redirected to the corresponding version of the home page. Do you foresee any problems with not using a language component in the URL (whether domain or path), as long as one makes sure the URLs don't conflict?

    Read the article

  • Multiple Businesses at The Same Physical Address - SEO / Google Places

    - by Howdy_McGee
    I was wondering whether there would be any negative effects of having multiple businesses with the exact same physical address on their websites. Currently we have five businesses at the exact same address, and it shows on each website, so when people Google one of the five businesses' addresses, they're going to get multiple results from multiple websites, most of which will not be what they're looking for. What is a way around this? What can I do about it? Would adding "Suite Numbers" be a solution? A thought occurred that it might be a good idea to create a landing page for users that are looking up a business by its address via Google. The page will bring up multiple businesses, since we have a few at the same address, but if we have a landing page at the top which then leads to the multiple businesses, that might solve the multiple-address SEO problem. I'm going to keep researching it, though. I also know that for Google Places (and possibly Bing Local and Yahoo Local) this could become a problem. I've submitted an inquiry with them, but I wanted to know if anybody had a ready-made solution for this, so that Google doesn't bunch all these companies together into one. Thanks!

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background

    I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because many agencies send traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of outside agencies for tracking these page hits. Consequently, we have tracking pixels which span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where JavaScript is needed for functionality built into the page but cannot initialize until the external tracking pixel request fails with a 404 (sometimes up to 30 seconds later). I have spent some time attempting to research how other firms deal with this sort of instability in third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinventing the wheel, we wanted to find out how this is dealt with on other sites.

    Question

    Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
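
    One common mitigation (not something from the original post) is to inject each pixel asynchronously after the page's own JavaScript has initialized, so a hung third-party host can never block page functionality. A minimal sketch; the pixel URL is illustrative:

        // Fire a tracking pixel without letting it block the page: an
        // injected Image loads in parallel with page scripts, and a failed
        // or slow pixel is simply ignored.
        function firePixel(src: string): void {
          const img = new Image();
          img.onerror = () => { /* a missing pixel is non-fatal */ };
          img.src = src;
        }

        // Only after the page's own code is ready:
        document.addEventListener("DOMContentLoaded", () => {
          firePixel("https://tracker.example.com/pixel.gif?page=home"); // illustrative
        });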

    Read the article
