Search Results

Search found 284 results on 12 pages for 'solr'.

Page 8/12

  • Deleting index from Solr using solrj as a client

    - by Azhar
    Hello, I am using solrj as the client for indexing documents on the Solr server. I am having a problem deleting indexed documents by 'id'. I am using the following code to delete them: server.deleteById("id:20"); server.commit(true,true); After this, when I search for the documents again, the result still contains the document above. I don't know what is going wrong with this code. Please help me out with this issue. Thanks!
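
    A likely culprit, shown in the sketch below (which assumes the schema's uniqueKey field is id): SolrJ's deleteById expects the raw key value, while a query string such as id:20 belongs to deleteByQuery. The URL in the sketch is a placeholder.

        import org.apache.solr.client.solrj.SolrServer;
        import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

        public class DeleteExample {
            public static void main(String[] args) throws Exception {
                // Placeholder URL for the Solr instance
                SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
                server.deleteById("20");        // expects the raw uniqueKey value, not a query
                server.deleteByQuery("id:20");  // a query string goes here instead
                server.commit();
            }
        }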

    Read the article

  • Distributed Lucene.NET

    - by user72185
    Hi, I have a terabyte of data, maybe more, which I'd like to index and search with Lucene. I'd like to be able to split the index across different machines, similar to what Solr does (if I understand Solr correctly). Are there any existing tools to do this on the Windows platform? Thanks!

    Read the article

  • What is a good sample solrconfig.xml for django-haystack?

    - by Danner
    I am building out a Solr instance for Django, but the example solrconfig.xml that ships with Solr is extremely verbose, with many things that are not relevant to Haystack. A sample with spelling suggestions, MoreLikeThis, and faceting, without the extra stuff that Haystack doesn't use, would go a long way toward helping me understand what is needed and what isn't.
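
    For reference, a trimmed sketch of just the handler wiring (not a complete solrconfig.xml; the names follow the stock Solr example and may need adjusting for Haystack's defaults):

        <requestHandler name="standard" class="solr.SearchHandler" default="true">
          <lst name="defaults">
            <str name="echoParams">explicit</str>
            <!-- faceting and MoreLikeThis are built-in components of SearchHandler;
                 they activate per request via facet=true and mlt=true -->
          </lst>
          <arr name="last-components">
            <str>spellcheck</str>
          </arr>
        </requestHandler>

        <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
          <lst name="spellchecker">
            <str name="name">default</str>
            <str name="field">text</str>
            <str name="spellcheckIndexDir">spellchecker</str>
          </lst>
        </searchComponent>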

    Read the article

  • Lucene-based database search engine

    - by Abhay
    Hi all, I am planning to add a search feature to my web application. The items to be searched are stored in a relational database. To build a full-text search engine over them, I have the following doubts:
    1. For a database-backed search engine, should I use plain Lucene or some utility based on Lucene, such as Solr, LuSql, or Compass?
    2. In the case of Solr, can it be embedded into the web application rather than deployed as a separate WAR?
    Thanks for your time. Regards
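
    On the second doubt: Solr can run in-process through SolrJ's EmbeddedSolrServer instead of as a separate WAR. A minimal sketch of the Solr 1.4-era API; the home path is a placeholder and must point at a prepared Solr home (solr.xml plus a conf directory):

        import org.apache.solr.client.solrj.SolrServer;
        import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
        import org.apache.solr.core.CoreContainer;

        public class EmbeddedSolrExample {
            public static void main(String[] args) throws Exception {
                // Placeholder path; must contain solr.xml and a conf directory
                System.setProperty("solr.solr.home", "/path/to/solr/home");
                CoreContainer.Initializer initializer = new CoreContainer.Initializer();
                CoreContainer container = initializer.initialize();
                SolrServer server = new EmbeddedSolrServer(container, "");
                // index and query through the normal SolrJ API here
                container.shutdown();
            }
        }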

    Read the article

  • Can I run a site search like Lucene on a single 2 GB server that's also a web & MySQL server?

    - by ian.evans
    My site's pages have exceeded the page limit for Google Custom Search, so many pages are missing from our site search results. I've been reading about Lucene, Nutch, Solr, etc., and I'm wondering whether I'd meet the requirements for running those on a single server that also runs the site (on nginx) and our MySQL server. We have 2 GB of RAM. I'd appreciate any suggestions for migrating to a new site search.

    Read the article

  • How do we increase the maximum allowed HTTP GET query length in Jetty?

    - by Mike
    We are using Jetty to run an Apache Solr index. Some of our queries have grown way beyond the previously expected maximum length, and most of them now return no data because the URL gets truncated. These requests are not made through a browser; they are made programmatically using the Apache_Solr_Service PHP library. The application expects queries to come in as HTTP GET requests, so simply switching to POST will not solve this problem. How can we increase the maximum allowed HTTP GET query length in Jetty? Thanks!
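
    The relevant knob is the connector's header buffer size, since the query string travels in the request line. A jetty.xml sketch, following Jetty 6 conventions as shipped with Solr's example; the port and size are placeholders, and the connector class should match the one already in your jetty.xml:

        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.nio.SelectChannelConnector">
              <Set name="port">8983</Set>
              <!-- raise from the default so long GET URLs fit -->
              <Set name="headerBufferSize">65536</Set>
            </New>
          </Arg>
        </Call>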

    Read the article

  • Lucene search on specific field name?

    - by Rachel
    I have been playing around with an installation of Solr that indexes some data from my database. I am able to index data and query it back, but I was wondering how field-name queries work. For certain fields I can specify their name and the search text, and the results return as expected; for other fields, when I specify their name and search text, no results are returned.
    q=type:book (this works)
    q=type:book AND title:"The Title" (no results returned)
    In this example, type is a required field and title is not. For the title search, I can see a document with the given title in the results of the first query, so I know a matching document exists. Is making a field 'required' the only way to be able to search it by name? [edit] I'm using the default installation and the 'example' folder inside of Solr, editing the XML files and using the interface available through start.jar to run, index, and query.
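
    One thing worth checking in schema.xml (a sketch based on the stock example schema): required only enforces the field's presence at index time. A field is searchable by name only if it is declared indexed="true", and its field type's analysis must match how the query is written.

        <!-- "required" controls validation when adding documents;
             "indexed" controls whether the field can be searched -->
        <field name="type"  type="string" indexed="true" stored="true" required="true"/>
        <field name="title" type="text"   indexed="true" stored="true"/>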

    Read the article

  • What is the --daemon option?

    - by Pascal Dimassimo
    I was installing Solr with Jetty using these instructions. Basically, those instructions have you download the Jetty startup script and copy it to /etc/init.d/jetty. But it was not working: each time I started Jetty, I got a "FAILED" message and nothing to explain why it was happening. I opened up the /etc/init.d/jetty script to understand what was going on and saw that it uses start-stop-daemon to launch Jetty. After some debugging, I discovered that removing the --daemon option at the end of the start-stop-daemon call fixed my problem. After a bit of research, I found that this guy had the same problem and resolved it the same way: by removing the --daemon option. What is weird is that the switch does not seem to be specific to start-stop-daemon, because it is not documented in the man page, yet I've seen it used with other commands. So what does that --daemon option do? And why did removing it resolve my problem? Note that I am working on Ubuntu 10.04.2 LTS.

    Read the article

  • No optimization causes wrong search result

    - by KailZhang
    I just took over our Solr/Lucene code from an ex-colleague, and there is a weird bug: if there is no optimization after a data import (in fact, whenever there are multiple segment files), the search results are wrong. We are using a customized Solr searchComponent. As far as I know about Lucene, optimization is only an optimization: it can improve search speed but should not affect search results. I suspect this may be related to multithreading or an unclosed searcher/reader or something similar. Can anybody help? Thank you.

    Read the article

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:
    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 5000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    The Question: How would you set up Lucene search so that each client can only search within its own database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial; not sure yet.)

    Possible Solutions:
    1. Make an index for each client (database). Pro: search is faster than with the one-index-for-all method, and indices are relative to the size of the client's data. Con: I'm not sure what this entails, nor do I know whether it is beyond Lucene's scope.
    2. Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for tech support or the billing dept to search all databases for info. Con: search is slower than with the index-per-client method, and security is flawed if the query filter is removed. (A sketch of this filter approach follows below.)

    For example: Joel Spolsky said in podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. His situation is quite similar to mine, although he didn't elaborate on the setup (particularly the indices); hence the need for this question. One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited to this problem; not sure.
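
    For the single-index option, the filter could be enforced by wrapping every user query before it reaches the searcher, along these lines (a sketch against the older Lucene API; database_name is the question's own field, not a Lucene builtin):

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        public class TenantFilter {
            // Wraps any user query so it can only match one client's documents.
            public static Query restrictToClient(Query userQuery, String databaseName) {
                BooleanQuery filtered = new BooleanQuery();
                filtered.add(userQuery, BooleanClause.Occur.MUST);
                filtered.add(new TermQuery(new Term("database_name", databaseName)),
                             BooleanClause.Occur.MUST);
                return filtered;
            }
        }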

    Read the article

  • SolrNet/ASP.NET sample without MVC

    - by Mikos
    I am trying to get a handle on SolrNet and on connecting an ASP.NET site to a Solr server. However, the sample app (in the code repository) is MVC-based; does anyone know of a version in plain vanilla ASP.NET? Thanks

    Read the article

  • Hosted full-text search solutions?

    - by James Cooper
    Does anyone know of companies offering SaaS full-text search? I'm looking for something that uses Lucene, Solr, or Sphinx on the backend and provides a REST API for submitting documents to index and running searches. I could build my own EC2 AMI, but I'd have to configure EBS and other things, monitor it, etc. I'm curious whether someone has already done all this and would charge per MB/GB indexed. Thank you. -- James

    Read the article

  • Error with varchar(max) column when using net.sourceforge.jtds.jdbc.Driver

    - by Rihan Meij
    Hi, I have an MS SQL database running (MS SQL 2005) and am connecting to it via net.sourceforge.jtds.jdbc.Driver. The query works fine for all columns except one that is a varchar(max). Any ideas how to get around this issue? I am using the JDBC driver to feed a data import into a Solr instance. (I do not control the database, so the first-prize solution would be one where I can tweak the SQL command to get the desired results.) Thanks
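
    One workaround to try, assuming the index is fed by Solr's DataImportHandler: jTDS maps varchar(max) to a CLOB by default, which the consumer may mishandle. The useLOBs=false connection property returns it as a plain string; alternatively, a CAST to a bounded varchar in the SELECT achieves the same on the SQL side. A data-config.xml fragment with placeholder host and credentials:

        <!-- sits inside the <dataConfig> root of data-config.xml -->
        <dataSource driver="net.sourceforge.jtds.jdbc.Driver"
                    url="jdbc:jtds:sqlserver://dbhost:1433/mydb;useLOBs=false"
                    user="solr" password="secret"/>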

    Read the article

  • Cfsearch across a combination of documents and indexed query data?

    - by Bart B
    Hi! I have an application that stores all kinds of data about people. The current cfsearch functionality (in Verity) includes searching documents that are attached to these people. If I have two documents attached to one person, one containing say ABC and the other containing XYZ, my ideal search result for "ABC AND XYZ" would return that one person. But as the two 'words' are indexed in different documents, the standard behaviour is for the cfsearch to return no result, because the combination doesn't exist in either of the two documents. Is there any way to combine indexed documents and/or query data so that the search is executed over the combination of relevant docs and data? In my application that would mean I could index all documents and data regarding people and have an intelligent 'global' search to find the right person. Any pointers and help very much appreciated! (If Solr offers new possibilities compared to Verity, no problem!) Thanks! Bart

    Read the article

  • Which metadata should I save when downloading web pages?

    - by Vojtech R.
    Hi, I'm going to download some thousands of web pages (for future language-processing purposes). Now I'm thinking about which metadata I should save. I have explored this, but I do not want to neglect something important. Currently I plan to keep:
    <title>
    <link>
    <publish_date>
    <date_downloaded>
    <source> // of this page
    <keyword> // for Solr indexing
    <text> // cleaned body of page
    Is there something important that I could miss in the future?

    Read the article

  • How to use NGramTokenizerFactory or NGramFilterFactory?

    - by user572485
    Hi, recently I have been studying how to store and index using Solr. I want to do a facet.prefix search. With the whitespace tokenizer, "Where are you" is split into three words and indexed. If I search with facet.prefix="where are", no results are returned. I googled and found that NGramFilterFactory could help me. But when I apply this filter factory, the result is "w, h, e, ..., wh, ..", which splits the sentence by character, not by token word. I used the parameters minGramSize and maxGramSize, set to 1 and 3. Does the NGramFilterFactory work right? Should I add some other parameters? Is there some other filter factory that can help me? Thanks!
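
    The NGramFilterFactory is behaving as designed: it emits character n-grams. For facet.prefix matching on word sequences, word-level n-grams ("shingles") are the token-based counterpart. A schema.xml sketch using ShingleFilterFactory (field type name and analyzer chain are illustrative):

        <fieldType name="shingled_text" class="solr.TextField">
          <analyzer>
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <!-- emits word n-grams up to 3 words long, e.g. "where are" -->
            <filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="true"/>
          </analyzer>
        </fieldType>

    With outputUnigrams="true" single words still match, while multi-word terms like "where are" become indexed terms that facet.prefix can see.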

    Read the article

  • Knowledge mining using Hadoop

    - by Anurag
    Hello there, I want to do a project on Hadoop and MapReduce and present it as my graduation project. I've given this some thought, searched the internet, and came up with the idea of implementing some basic knowledge-mining algorithms, say on social websites like Facebook, or maybe Stack Overflow, Quora, etc., and drawing some statistical graphs, comparisons, frequency distributions, and other sorts of important values. For the searching, would it be wise to use Apache Solr? I want to know whether such a thing is feasible using the above-mentioned tools, and if so, how I should build on this little idea. Where can I learn about knowledge-mining algorithms that are easy to implement using Java and map-reduce techniques? In case this is a wrong idea, please suggest what else could otherwise be done using Hadoop and other related sub-projects. Thank you

    Read the article

  • Best full-text search for MySQL?

    - by ConroyP
    We're currently running MySQL on a LAMP stack and have been looking at implementing a more thorough full-text search on our site. We've looked at MySQL's own full-text search, but it doesn't seem to cope well with large databases, which makes it far too slow for our needs. Our main requirements are:
    - speed in returning results
    - simple updating of the index
    In addition to the above, our "nice to have"s are:
    - ideally not something that requires adding a module to MySQL
    - plays nicely with PHP (the majority of our dev work is done in PHP)
    There seem to be quite a few healthy open-source projects that add fast, reliable full-text search to MySQL, so I'm basically looking for recommendations/suggestions on what you've found to be the most useful product out there, easiest to set up, etc. So far, the ones we've been starting to play around with are:
    - Sphinx: C++ based, used by craigslist and thepiratebay
    - Lucene: Java-based Apache project, powers zeoh.com and zoomf.com
    - Solr: Java-based offshoot of Lucene, used to power searches on Digg, CNet & AOL Channels
    Are there any better ones out there that we haven't come across yet? Can you recommend or advise against any of the options we've gathered so far? Thanks for your help!

    Update: @Cletus suggested Google's Custom Search Engine. We recently trialled this on a couple of projects, and it's an almost-perfect fit for our needs. The problem is that entries on our site are updated quite regularly, and unfortunately the speed at which entries go in or get updated in Google's index was just too slow and erratic for us to rely on, even with the addition of sitemaps and requested crawl-rate changes.

    Read the article

  • acts_as_solr isn't updating associated models in Rails

    - by Trey Bean
    I'm using acts_as_solr for search in a project. Unfortunately, the index doesn't seem to be updated for associated models when a model is saved. Example: I have three models:

        class Merchant < ActiveRecord::Base
          acts_as_solr :fields => [:name, :domain, :description], :include => [:coupons, :tags]
          ...
        end

        class Coupon < ActiveRecord::Base
          acts_as_solr :fields => [:store_name, :url, :code, :description]
          ...
        end

        class Tag < ActiveRecord::Base
          acts_as_solr :fields => [:name]
          ...
        end

    I use the following line to perform a search:

        Merchant.paginate_by_solr(params[:q], :per_page => PER_PAGE, :page => [(params[:page] || 1).to_i, 1].max)

    For some reason, though, after I add a coupon that contains the word 'shoes' in its description, a query for 'shoes' doesn't return the merchant associated with the coupon. The associations all work, and if I run rake solr:reindex, the search then returns the new coupon. Do I need to update the index for the Merchant each time a new coupon is created? Do I have to update the index for the whole class, or can I just update the associated merchant? Shouldn't this be done automatically? Thanks for any input.
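
    acts_as_solr reindexes only the model that was saved, so saving a Coupon does not refresh the Merchant document that :include built. A sketch of a targeted fix, assuming acts_as_solr's instance-level solr_save (the method its own save callback uses) is available:

        # Re-index the parent merchant whenever a coupon changes,
        # so the merchant's Solr document picks up the new description.
        class Coupon < ActiveRecord::Base
          acts_as_solr :fields => [:store_name, :url, :code, :description]
          belongs_to :merchant
          after_save    :reindex_merchant
          after_destroy :reindex_merchant

          private

          def reindex_merchant
            merchant.solr_save if merchant
          end
        end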

    Read the article

  • How to code a 'Next in Results' link within search results in PHP

    - by thebluefox
    Right, bit of a head scratcher, although I've got a feeling there's an obvious answer and I'm just not seeing the wood for the trees. Basically, we use Solr as the search engine for my site, bringing back 15 results per page. When you click on a result, you get a detail page that has a "Next in Results" link on it, which obviously forwards you to the next result. What's the best way of doing this? I've come up with a few solutions, but they're either impractical or just don't work. I could store all the IDs in a session array, then grab the one after the current one and put that in the link. But with possibly hundreds or thousands of results, the memory that array would need and the performance hit of dealing with it aren't practical. I could take the same approach and put it into the db, but I'd still have to deal with a potentially huge array when I grab the IDs back out. Or I could run the search again, returning only the IDs, and grab the one after the one we're currently looking at. I think this could be the best option, although it does seem kind of messy, namely when I have to select the ID that's on a different 'page' (i.e. the 16th, 31st, etc. result). Unless I pass through where it was in the results and select from there, but that still doesn't seem like the right way to do it. I'm really sorry if this is just complete nonsense; any help is massively appreciated as always. Cheers guys!
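
    The third option need not be messy: the follow-up search can fetch exactly one ID at the next position, so there is no page boundary to special-case. A sketch assuming the solr-php-client's Apache_Solr_Service, with $query and $currentPosition supplied by the detail page:

        <?php
        require_once 'Apache/Solr/Service.php';

        $solr = new Apache_Solr_Service('localhost', 8983, '/solr/');

        // Ask Solr for a single result starting right after the current one,
        // returning only the id field to keep the response tiny.
        $response = $solr->search($query, $currentPosition + 1, 1, array('fl' => 'id'));
        $docs = $response->response->docs;
        $nextId = empty($docs) ? null : $docs[0]->id;  // null when on the last result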

    Read the article

  • acts_as_solr returns all rows in the database when using the model name as the search query

    - by chris Chan
    In our application we're using acts_as_solr for search. Everything seems to be running smoothly except that using the model name as the search query returns every single row in the table. For example, let's say we have a users table. We specify acts_as_solr in our model to search the fields first name, last name, and handle: acts_as_solr :fields => [:handle, :lname, :fname]. When you use "user" as the search term, it returns every single user in the system, i.e. every row in the table. Has anyone else run into this?

    Read the article

  • Indexing a method's return value (depending on internationalization)

    - by Hedde
    Consider a Django model with an IntegerField with some choices, e.g.:

        COLORS = (
            (0, _(u"Blue")),
            (1, _(u"Red")),
            (2, _(u"Yellow")),
        )

        class Foo(models.Model):
            # ...other fields...
            color = models.PositiveIntegerField(choices=COLORS, verbose_name=_(u"color"))

    My current (Haystack) index:

        class FooIndex(SearchIndex):
            text = CharField(document=True, use_template=True)
            color = CharField(model_attr='color')

            def prepare_color(self, obj):
                return obj.get_color_display()

        site.register(Foo, FooIndex)

    This obviously only works for the keyword "yellow", but not for any (available) translations. Question: what would be a good way to solve this problem (indexing method returns based on the active language)? What I have tried: I created a function that loops over every available language (from settings), appending each translation to a list, and evaluated that list against the query before the search. If any colors matched, it converted them back into their numeric representation to evaluate against obj.color, but this feels wrong.
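
    One way to avoid the query-side gymnastics (a sketch, assuming Django's translation utilities and that indexing every translation into the document is acceptable): have prepare_color emit the label in all configured languages, so a query in any of them matches at search time.

        from django.conf import settings
        from django.utils import translation
        from haystack.indexes import SearchIndex, CharField

        class FooIndex(SearchIndex):
            text = CharField(document=True, use_template=True)
            color = CharField(model_attr='color')

            def prepare_color(self, obj):
                # Index the choice label in every configured language; unicode()
                # forces the lazy translation while that language is active.
                current = translation.get_language()
                labels = []
                for code, name in settings.LANGUAGES:
                    translation.activate(code)
                    labels.append(unicode(obj.get_color_display()))
                translation.activate(current)
                return u" ".join(labels)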

    Read the article

  • Configure iptables to allow a PHP app to access a port number

    - by Camran
    I have a PHP application that connects to another app called Solr (a database search engine). Via this PHP app I can add and remove documents (records) from the Solr index. However, Solr's security is low, and anybody with the right port number can access Solr and remove documents. I wonder, is it possible to allow ONLY my own PHP app to access Solr somehow, preferably via iptables? I am thinking I could allow only my own server's IP on that port, which would solve my problem because PHP is server-side code, but I am not sure. About the PHP app: the website is a classifieds website, and when users want to add or remove classifieds, they do so through this PHP app. The app has a function that connects to Solr and updates the database (index). I appreciate detailed answers. Thanks
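
    A sketch of the iptables side, assuming Solr listens on its default port 8983 and the PHP app talks to it over the loopback interface (rule order matters; adapt if the app and Solr sit on different hosts):

        # accept connections to Solr's port only from the local machine
        iptables -A INPUT -p tcp --dport 8983 -i lo -j ACCEPT
        # drop everyone else trying to reach that port
        iptables -A INPUT -p tcp --dport 8983 -j DROP

    Alternatively, binding Jetty's connector to 127.0.0.1 keeps Solr off the public interface without any firewall rules.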

    Read the article

  • Ubuntu server: backup of server, MySQL database, and Solr database

    - by Camran
    How are backups done on Ubuntu servers? I have a server (Ubuntu 9.10) with apache2, PHP 5, MySQL, etc. installed. The website is a classifieds website where all classifieds are stored in MySQL and Solr. I need to back up this server with all its information, to be able to fully restore it if something goes wrong. How should I start? Is it an automated task, or will I do backups manually? (I prefer manually.) Thanks
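
    A manual starting point, sketched with placeholder paths and names: dump MySQL with mysqldump and archive the web root and the Solr data directory (ideally after a Solr commit, so the index files are in a consistent state):

        # dump the MySQL database (prompts for the password)
        mysqldump -u root -p classifieds_db > /backup/classifieds_db.sql
        # archive the website files and the Solr index
        tar czf /backup/www.tar.gz /var/www
        tar czf /backup/solr-data.tar.gz /path/to/solr/data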

    Read the article
