org.apache.lucene.search.similar
public final class MoreLikeThis extends Object
Lucene does let you access the document frequency of terms, with IndexReader.docFreq().
Term frequencies can be computed by re-tokenizing the text, which, for a single document,
is usually fast enough. But looking up the docFreq() of every term in the document is
probably too slow.
You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much,
or at all. Since you're trying to maximize a tf*idf score, you're probably most interested
in terms with a high tf. Choosing a tf threshold even as low as two or three will radically
reduce the number of terms under consideration. Another heuristic is that terms with a
high idf (i.e., a low df) tend to be longer. So you could threshold the terms by the
number of characters, not selecting anything less than, e.g., six or seven characters.
With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms
that do a pretty good job of characterizing a document.
It all depends on what you're trying to do. If you're trying to eke out that last percent
of precision and recall regardless of computational difficulty so that you can win a TREC
competition, then the techniques I mention above are useless. But if you're trying to
provide a "more like this" button on a search results page that does a decent job and has
good performance, such techniques might be useful.
An efficient, effective "more-like-this" query generator would be a great contribution, if
anyone's interested. I'd imagine that it would take a Reader or a String (the document's
text) and an Analyzer, and return a set of representative terms using heuristics like those
above. The frequency and length thresholds could be parameters, etc.
Doug
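As a rough, non-authoritative sketch of the heuristics described above (whitespace tokenization stands in for a real Analyzer, and the class name, method name and thresholds are illustrative assumptions, not part of this package):

import java.util.*;

// Hypothetical helper, not part of Lucene: prune candidate terms by term frequency and
// word length, then keep only the most frequent few, so that docFreq() need only be
// consulted for a handful of terms (if at all).
public class RepresentativeTerms {
    public static List selectTerms(String text, int minTf, int minWordLen, int maxTerms) {
        Map freq = new HashMap(); // term -> Integer occurrence count
        StringTokenizer tok = new StringTokenizer(text.toLowerCase());
        while (tok.hasMoreTokens()) {
            String term = tok.nextToken();
            if (term.length() < minWordLen) continue; // short words tend to have low idf
            Integer n = (Integer) freq.get(term);
            freq.put(term, new Integer(n == null ? 1 : n.intValue() + 1));
        }
        List candidates = new ArrayList();
        for (Iterator it = freq.entrySet().iterator(); it.hasNext();) {
            Map.Entry e = (Map.Entry) it.next();
            if (((Integer) e.getValue()).intValue() >= minTf) {
                candidates.add(e); // a tf threshold of two or three already prunes most terms
            }
        }
        // sort by descending term frequency and keep the top maxTerms entries
        Collections.sort(candidates, new Comparator() {
            public int compare(Object a, Object b) {
                return ((Integer) ((Map.Entry) b).getValue()).intValue()
                     - ((Integer) ((Map.Entry) a).getValue()).intValue();
            }
        });
        List result = new ArrayList();
        for (int i = 0; i < candidates.size() && i < maxTerms; i++) {
            result.add(((Map.Entry) candidates.get(i)).getKey());
        }
        return result;
    }
}

The simplest usage of MoreLikeThis itself is along these lines: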
IndexReader ir = ...
IndexSearcher is = ...
MoreLikeThis mlt = new MoreLikeThis(ir);
Reader target = ... // orig source of doc you want to find similarities to
Query query = mlt.like(target);
Hits hits = is.search(query);
// now the usual iteration through 'hits' - the only thing to watch for is to make sure
// you ignore the doc if it matches your 'target' document, as it should be similar to itself
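For example, the iteration could look like the following sketch; targetDocId and the "title" field are illustrative assumptions (the document id is only known when the target document is itself in the index):

for (int i = 0; i < hits.length(); i++) {
    if (hits.id(i) == targetDocId) continue;   // skip the target document itself
    Document d = hits.doc(i);
    System.out.println(hits.score(i) + "  " + d.get("title")); // "title" is an assumed stored field
}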
Thus you:
1. do your normal Lucene setup for searching,
2. create a MoreLikeThis,
3. get the text of the doc you want to find similarities to,
4. call one of the like() calls to generate a similarity query,
5. call the searcher to find the similar docs.
Depending on the size of your index and the size and makeup of your documents, you may want to call the other set methods to control how the similarity queries are generated, for example setMinTermFreq(int), setMinDocFreq(int), setMaxQueryTerms(int), setStopWords(Set) and setFieldNames(String[]).
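For instance, a hedged tuning sketch; the field name "contents", the choice of StandardAnalyzer and every threshold below are arbitrary illustrations, not recommended values:

mlt.setFieldNames(new String[] {"contents"}); // assumed indexed field name
mlt.setAnalyzer(new StandardAnalyzer());      // analyzer used to re-tokenize the source text
mlt.setMinTermFreq(2);                        // ignore terms that occur only once in the source doc
mlt.setMinDocFreq(5);                         // ignore terms that occur in fewer than 5 docs
mlt.setMinWordLen(3);                         // ignore very short words
mlt.setMaxQueryTerms(25);                     // cap the number of terms in the generated query
Query query = mlt.like(target);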
Changes: Mark Harwood 29/02/04 - some bugfixing, some refactoring, some optimisation:
- bugfix: retrieveTerms(int docNum) was not working for indexes without a term vector - added missing code
- bugfix: no significant terms were being created for fields with a term vector, because only one occurrence per term/field pair was counted in the calculations (i.e. frequency info from the TermVector was not included)
- refactor: moved common code into isNoiseWord()
- optimise: when no term vector support is available, used maxNumTokensParsed to limit the amount of tokenization
Field Summary | |
---|---|
static int | DEFALT_MIN_DOC_FREQ: Ignore words which do not occur in at least this many docs.
static Analyzer | DEFAULT_ANALYZER: Default analyzer to parse source doc with.
static boolean | DEFAULT_BOOST: Boost terms in query based on score.
static String[] | DEFAULT_FIELD_NAMES: Default field names.
static int | DEFAULT_MAX_NUM_TOKENS_PARSED: Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
static int | DEFAULT_MAX_QUERY_TERMS: Return a Query with no more than this many terms.
static int | DEFAULT_MAX_WORD_LENGTH: Ignore words greater than this length or, if 0, then this has no effect.
static int | DEFAULT_MIN_TERM_FREQ: Ignore terms with less than this frequency in the source doc.
static int | DEFAULT_MIN_WORD_LENGTH: Ignore words less than this length or, if 0, then this has no effect.
static Set | DEFAULT_STOP_WORDS: Default set of stopwords.
Constructor Summary | |
---|---|
MoreLikeThis(IndexReader ir) | Constructor requiring an IndexReader.
Method Summary | |
---|---|
String | describeParams(): Describe the parameters that control how the "more like this" query is formed.
Analyzer | getAnalyzer(): Returns the analyzer that will be used to parse the source doc.
String[] | getFieldNames(): Returns the field names that will be used when generating the 'More Like This' query.
int | getMaxNumTokensParsed(): Returns the maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
int | getMaxQueryTerms(): Returns the maximum number of query terms that will be included in any generated query.
int | getMaxWordLen(): Returns the maximum word length above which words will be ignored.
int | getMinDocFreq(): Returns the frequency at which words will be ignored which do not occur in at least this many docs.
int | getMinTermFreq(): Returns the frequency below which terms will be ignored in the source doc.
int | getMinWordLen(): Returns the minimum word length below which words will be ignored.
Set | getStopWords(): Get the current stop words being used.
boolean | isBoost(): Returns whether to boost terms in query based on "score" or not.
Query | like(int docNum): Return a query that will return docs like the passed lucene document ID.
Query | like(File f): Return a query that will return docs like the passed file.
Query | like(URL u): Return a query that will return docs like the passed URL.
Query | like(InputStream is): Return a query that will return docs like the passed stream.
Query | like(Reader r): Return a query that will return docs like the passed Reader.
static void | main(String[] a): Test driver.
String[] | retrieveInterestingTerms(Reader r): Convenience routine to make it easy to return the most interesting words in a document.
PriorityQueue | retrieveTerms(Reader r): Find words for a more-like-this query former.
void | setAnalyzer(Analyzer analyzer): Sets the analyzer to use.
void | setBoost(boolean boost): Sets whether to boost terms in query based on "score" or not.
void | setFieldNames(String[] fieldNames): Sets the field names that will be used when generating the 'More Like This' query.
void | setMaxNumTokensParsed(int i): Sets the maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
void | setMaxQueryTerms(int maxQueryTerms): Sets the maximum number of query terms that will be included in any generated query.
void | setMaxWordLen(int maxWordLen): Sets the maximum word length above which words will be ignored.
void | setMinDocFreq(int minDocFreq): Sets the frequency at which words will be ignored which do not occur in at least this many docs.
void | setMinTermFreq(int minTermFreq): Sets the frequency below which terms will be ignored in the source doc.
void | setMinWordLen(int minWordLen): Sets the minimum word length below which words will be ignored.
void | setStopWords(Set stopWords): Set the set of stopwords.
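Where only the selected terms are wanted rather than a full Query, retrieveInterestingTerms(Reader) can be used directly. A brief sketch, reusing the mlt instance from the earlier example; the field name and file name are illustrative assumptions:

mlt.setFieldNames(new String[] {"contents"});                                 // assumed field name
String[] terms = mlt.retrieveInterestingTerms(new FileReader("target.txt")); // assumed source file
for (int i = 0; i < terms.length; i++) {
    System.out.println(terms[i]); // print the representative terms
}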
Method Detail

getAnalyzer()
Returns: the analyzer that will be used to parse the source doc.
See Also: DEFAULT_ANALYZER

getFieldNames()
Returns: the field names that will be used when generating the 'More Like This' query.

getMaxNumTokensParsed()
Returns: the maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
See Also: DEFAULT_MAX_NUM_TOKENS_PARSED

getMaxQueryTerms()
Returns: the maximum number of query terms that will be included in any generated query.

getMaxWordLen()
Returns: the maximum word length above which words will be ignored.

getMinDocFreq()
Returns: the frequency at which words will be ignored which do not occur in at least this many docs.

getMinTermFreq()
Returns: the frequency below which terms will be ignored in the source doc.

getMinWordLen()
Returns: the minimum word length below which words will be ignored.

isBoost()
Returns: whether to boost terms in query based on "score" or not.

like(int docNum)
Parameters: docNum - the document ID of the lucene doc to generate the 'More Like This' query for.
Returns: a query that will return docs like the passed lucene document ID.

like(File f)
Returns: a query that will return docs like the passed file.

like(URL u)
Returns: a query that will return docs like the passed URL.

like(InputStream is)
Returns: a query that will return docs like the passed stream.

like(Reader r)
Returns: a query that will return docs like the passed Reader.

retrieveInterestingTerms(Reader r)
Parameters: r - the source document
Returns: the most interesting words in the document
See Also: retrieveTerms(Reader r)

retrieveTerms(Reader r)
Parameters: r - the reader that has the content of the document
Returns: the most interesting words in the document, ordered by score, with the highest scoring (best) entry first

setAnalyzer(Analyzer analyzer)
Parameters: analyzer - the analyzer to use to tokenize text.

setBoost(boolean boost)
Parameters: boost - true to boost terms in query based on "score", false otherwise.

setFieldNames(String[] fieldNames)
Parameters: fieldNames - the field names that will be used when generating the 'More Like This' query.

setMaxNumTokensParsed(int i)
Parameters: i - the maximum number of tokens to parse in each example doc field that is not stored with TermVector support.

setMaxQueryTerms(int maxQueryTerms)
Parameters: maxQueryTerms - the maximum number of query terms that will be included in any generated query.

setMaxWordLen(int maxWordLen)
Parameters: maxWordLen - the maximum word length above which words will be ignored.

setMinDocFreq(int minDocFreq)
Parameters: minDocFreq - the frequency at which words will be ignored which do not occur in at least this many docs.

setMinTermFreq(int minTermFreq)
Parameters: minTermFreq - the frequency below which terms will be ignored in the source doc.

setMinWordLen(int minWordLen)
Parameters: minWordLen - the minimum word length below which words will be ignored.

setStopWords(Set stopWords)
Parameters: stopWords - set of stopwords; if null it means to allow stop words.
See Also: StopFilter.makeStopSet()
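A short sketch of the setStopWords / StopFilter.makeStopSet() interaction noted above; the word list is an arbitrary illustration:

Set stopWords = StopFilter.makeStopSet(new String[] {"a", "an", "the", "of", "and"});
mlt.setStopWords(stopWords);   // terms in this set are never selected for the generated query
// mlt.setStopWords(null);     // passing null instead allows stop words through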