public final class MoreLikeThis
extends java.lang.Object
Lucene does let you access the document frequency of terms, with IndexReader.docFreq().
Term frequencies can be computed by re-tokenizing the text, which, for a single document,
is usually fast enough. But looking up the docFreq() of every term in the document is
probably too slow.
You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much,
or at all. Since you're trying to maximize a tf*idf score, you're probably most interested
in terms with a high tf. Choosing a tf threshold even as low as two or three will radically
reduce the number of terms under consideration. Another heuristic is that terms with a
high idf (i.e., a low df) tend to be longer. So you could threshold the terms by the
number of characters, not selecting anything less than, e.g., six or seven characters.
With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms
that do a pretty good job of characterizing a document.
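A minimal, self-contained sketch of these heuristics (the class and method names here are illustrative, not part of the Lucene API): filter a term-to-frequency map by a tf threshold and a minimum word length, then keep the highest-tf survivors.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TermPruner {
    /**
     * Prune a term -> frequency map using the heuristics above:
     * drop terms with tf below minTermFreq or shorter than minWordLen,
     * then keep at most maxTerms of the highest-frequency survivors.
     */
    public static List<String> representativeTerms(Map<String, Integer> termFreqs,
                                                   int minTermFreq,
                                                   int minWordLen,
                                                   int maxTerms) {
        List<Map.Entry<String, Integer>> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : termFreqs.entrySet()) {
            if (e.getValue() >= minTermFreq && e.getKey().length() >= minWordLen) {
                kept.add(e);
            }
        }
        // highest term frequency first
        kept.sort((a, b) -> b.getValue() - a.getValue());
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Integer> e : kept.subList(0, Math.min(maxTerms, kept.size()))) {
            result.add(e.getKey());
        }
        return result;
    }
}
```

Even with a tf threshold as low as 2 and a length threshold of 6, most stopwords and incidental terms fall out before any docFreq() lookups are needed.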
It all depends on what you're trying to do. If you're trying to eke out that last percent
of precision and recall regardless of computational difficulty so that you can win a TREC
competition, then the techniques I mention above are useless. But if you're trying to
provide a "more like this" button on a search results page that does a decent job and has
good performance, such techniques might be useful.
An efficient, effective "more-like-this" query generator would be a great contribution, if
anyone's interested. I'd imagine that it would take a Reader or a String (the document's
text), an Analyzer, and return a set of representative terms using heuristics like those
above. The frequency and length thresholds could be parameters, etc.
Doug
This class has lots of options to try to make it efficient and flexible.
The simplest possible usage is as follows; the construction of `mlt` and the call to `like(...)` are the parts specific to this class.

```java
IndexReader ir = ...;
IndexSearcher is = ...;

MoreLikeThis mlt = new MoreLikeThis(ir);
Reader target = ...; // orig source of doc you want to find similarities to
Query query = mlt.like(target);

Hits hits = is.search(query);
// now the usual iteration thru 'hits' - the only thing to watch for is to make sure
// you ignore the doc if it matches your 'target' document, as it should be similar to itself
```
Thus you:

1. do your normal, Lucene setup for searching,
2. create a MoreLikeThis,
3. get the text of the doc you want to find similarities to,
4. then call one of the like() calls to generate a similar query,
5. call the searcher to find the similar docs.
You may want to use setFieldNames(...) so you can examine multiple fields (e.g. body and title) for similarity.
Depending on the size of your index and the size and makeup of your documents, you may want to call the other set methods to control how the similarity queries are generated:
setMinTermFreq(...)
setMinDocFreq(...)
setMaxDocFreq(...)
setMaxDocFreqPct(...)
setMinWordLen(...)
setMaxWordLen(...)
setMaxQueryTerms(...)
setMaxNumTokensParsed(...)
setStopWords(...)
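For instance, a typical configuration might look like the following sketch. The threshold values are illustrative assumptions, not tuned recommendations, and `ir` and `analyzer` are assumed to already exist.

```java
// Illustrative configuration only; values are examples, not recommended defaults.
MoreLikeThis mlt = new MoreLikeThis(ir);           // ir is an open IndexReader
mlt.setAnalyzer(analyzer);                         // required for the Reader/String 'like' variants
mlt.setFieldNames(new String[] {"title", "body"}); // fields to examine for similarity
mlt.setMinTermFreq(2);     // ignore terms appearing fewer than 2 times in the source doc
mlt.setMinDocFreq(5);      // ignore terms appearing in fewer than 5 docs
mlt.setMinWordLen(3);      // skip very short tokens
mlt.setMaxQueryTerms(25);  // cap the size of the generated query
```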
Changes: Mark Harwood 29/02/04. Some bugfixing, some refactoring, some optimisation.

- bugfix: retrieveTerms(int docNum) was not working for indexes without a termvector; added missing code
- bugfix: no significant terms were being created for fields with a termvector, because only one occurrence per term/field pair was counted (i.e. frequency info from the TermVector was not included)
- refactor: moved common code into isNoiseWord()
- optimise: when no termvector support is available, used maxNumTokensParsed to limit the amount of tokenization
| Modifier and Type | Class and Description |
|---|---|
| `private static class` | `MoreLikeThis.FreqQ`: PriorityQueue that orders words by score. |
| `private static class` | `MoreLikeThis.Int`: Use for frequencies and to avoid renewing Integers. |
| `private static class` | `MoreLikeThis.ScoreTerm` |
| Modifier and Type | Field and Description |
|---|---|
| `private Analyzer` | `analyzer`: Analyzer that will be used to parse the doc. |
| `private boolean` | `boost`: Should we apply a boost to the Query based on the scores? |
| `private float` | `boostFactor`: Boost factor to use when boosting the terms. |
| `static boolean` | `DEFAULT_BOOST`: Boost terms in query based on score. |
| `static java.lang.String[]` | `DEFAULT_FIELD_NAMES`: Default field names. |
| `static int` | `DEFAULT_MAX_DOC_FREQ`: Ignore words which occur in more than this many docs. |
| `static int` | `DEFAULT_MAX_NUM_TOKENS_PARSED`: Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support. |
| `static int` | `DEFAULT_MAX_QUERY_TERMS`: Return a Query with no more than this many terms. |
| `static int` | `DEFAULT_MAX_WORD_LENGTH`: Ignore words greater than this length, or if 0 then this has no effect. |
| `static int` | `DEFAULT_MIN_DOC_FREQ`: Ignore words which do not occur in at least this many docs. |
| `static int` | `DEFAULT_MIN_TERM_FREQ`: Ignore terms with less than this frequency in the source doc. |
| `static int` | `DEFAULT_MIN_WORD_LENGTH`: Ignore words less than this length, or if 0 then this has no effect. |
| `static java.util.Set<?>` | `DEFAULT_STOP_WORDS`: Default set of stopwords. |
| `private java.lang.String[]` | `fieldNames`: Field names we'll analyze. |
| `private IndexReader` | `ir`: IndexReader to use. |
| `private int` | `maxDocFreq`: Ignore words which occur in more than this many docs. |
| `private int` | `maxNumTokensParsed`: The maximum number of tokens to parse in each example doc field that is not stored with TermVector support. |
| `private int` | `maxQueryTerms`: Don't return a query longer than this. |
| `private int` | `maxWordLen`: Ignore words if greater than this length. |
| `private int` | `minDocFreq`: Ignore words which do not occur in at least this many docs. |
| `private int` | `minTermFreq`: Ignore words less frequent than this. |
| `private int` | `minWordLen`: Ignore words if less than this length. |
| `private TFIDFSimilarity` | `similarity`: For idf() calculations. |
| `private java.util.Set<?>` | `stopWords`: Current set of stop words. |
| Constructor and Description |
|---|
| `MoreLikeThis(IndexReader ir)`: Constructor requiring an IndexReader. |
| `MoreLikeThis(IndexReader ir, TFIDFSimilarity sim)` |
| Modifier and Type | Method and Description |
|---|---|
| `private void` | `addTermFrequencies(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> field2termFreqMap, Terms vector, java.lang.String fieldName)`: Adds terms and frequencies found in vector into the Map termFreqMap. |
| `private void` | `addTermFrequencies(java.io.Reader r, java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies, java.lang.String fieldName)`: Adds term frequencies found by tokenizing text from the reader into the Map words. |
| `private Query` | `createQuery(PriorityQueue<MoreLikeThis.ScoreTerm> q)`: Create the More like query from a PriorityQueue. |
| `private PriorityQueue<MoreLikeThis.ScoreTerm>` | `createQueue(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies)`: Create a PriorityQueue from a word->tf map. |
| `java.lang.String` | `describeParams()`: Describe the parameters that control how the "more like this" query is formed. |
| `Analyzer` | `getAnalyzer()`: Returns the analyzer that will be used to parse the source doc. |
| `float` | `getBoostFactor()`: Returns the boost factor used when boosting terms. |
| `java.lang.String[]` | `getFieldNames()`: Returns the field names that will be used when generating the 'More Like This' query. |
| `int` | `getMaxDocFreq()`: Returns the maximum frequency in which words may still appear. |
| `int` | `getMaxNumTokensParsed()` |
| `int` | `getMaxQueryTerms()`: Returns the maximum number of query terms that will be included in any generated query. |
| `int` | `getMaxWordLen()`: Returns the maximum word length above which words will be ignored. |
| `int` | `getMinDocFreq()`: Returns the minimum document frequency: words that do not occur in at least this many docs are ignored. |
| `int` | `getMinTermFreq()`: Returns the frequency below which terms will be ignored in the source doc. |
| `int` | `getMinWordLen()`: Returns the minimum word length below which words will be ignored. |
| `TFIDFSimilarity` | `getSimilarity()` |
| `java.util.Set<?>` | `getStopWords()`: Get the current stop words being used. |
| `private int` | `getTermsCount(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies)` |
| `boolean` | `isBoost()`: Returns whether to boost terms in query based on "score" or not. |
| `private boolean` | `isNoiseWord(java.lang.String term)`: Determines if the passed term is likely to be of interest in "more like" comparisons. |
| `Query` | `like(int docNum)`: Return a query that will return docs like the passed lucene document ID. |
| `Query` | `like(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> filteredDocument)` |
| `Query` | `like(java.lang.String fieldName, java.io.Reader... readers)`: Return a query that will return docs like the passed Readers. |
| `java.lang.String[]` | `retrieveInterestingTerms(int docNum)` |
| `java.lang.String[]` | `retrieveInterestingTerms(java.io.Reader r, java.lang.String fieldName)`: Convenience routine to make it easy to return the most interesting words in a document. |
| `private PriorityQueue<MoreLikeThis.ScoreTerm>` | `retrieveTerms(int docNum)`: Find words for a more-like-this query former. |
| `private PriorityQueue<MoreLikeThis.ScoreTerm>` | `retrieveTerms(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> field2fieldValues)` |
| `private PriorityQueue<MoreLikeThis.ScoreTerm>` | `retrieveTerms(java.io.Reader r, java.lang.String fieldName)`: Find words for a more-like-this query former. |
| `void` | `setAnalyzer(Analyzer analyzer)`: Sets the analyzer to use. |
| `void` | `setBoost(boolean boost)`: Sets whether to boost terms in query based on "score" or not. |
| `void` | `setBoostFactor(float boostFactor)`: Sets the boost factor to use when boosting terms. |
| `void` | `setFieldNames(java.lang.String[] fieldNames)`: Sets the field names that will be used when generating the 'More Like This' query. |
| `void` | `setMaxDocFreq(int maxFreq)`: Set the maximum frequency in which words may still appear. |
| `void` | `setMaxDocFreqPct(int maxPercentage)`: Set the maximum percentage in which words may still appear. |
| `void` | `setMaxNumTokensParsed(int i)` |
| `void` | `setMaxQueryTerms(int maxQueryTerms)`: Sets the maximum number of query terms that will be included in any generated query. |
| `void` | `setMaxWordLen(int maxWordLen)`: Sets the maximum word length above which words will be ignored. |
| `void` | `setMinDocFreq(int minDocFreq)`: Sets the minimum document frequency: words that do not occur in at least this many docs are ignored. |
| `void` | `setMinTermFreq(int minTermFreq)`: Sets the frequency below which terms will be ignored in the source doc. |
| `void` | `setMinWordLen(int minWordLen)`: Sets the minimum word length below which words will be ignored. |
| `void` | `setSimilarity(TFIDFSimilarity similarity)` |
| `void` | `setStopWords(java.util.Set<?> stopWords)`: Set the set of stopwords. |
public static final int DEFAULT_MAX_NUM_TOKENS_PARSED
See Also: getMaxNumTokensParsed(), Constant Field Values

public static final int DEFAULT_MIN_TERM_FREQ
See Also: getMinTermFreq(), setMinTermFreq(int), Constant Field Values

public static final int DEFAULT_MIN_DOC_FREQ
See Also: getMinDocFreq(), setMinDocFreq(int), Constant Field Values

public static final int DEFAULT_MAX_DOC_FREQ

public static final boolean DEFAULT_BOOST
See Also: isBoost(), setBoost(boolean), Constant Field Values

public static final java.lang.String[] DEFAULT_FIELD_NAMES

public static final int DEFAULT_MIN_WORD_LENGTH
See Also: getMinWordLen(), setMinWordLen(int), Constant Field Values

public static final int DEFAULT_MAX_WORD_LENGTH
See Also: getMaxWordLen(), setMaxWordLen(int), Constant Field Values

public static final java.util.Set<?> DEFAULT_STOP_WORDS
See Also: setStopWords(java.util.Set<?>), getStopWords()
private java.util.Set<?> stopWords
public static final int DEFAULT_MAX_QUERY_TERMS
private Analyzer analyzer
private int minTermFreq
private int minDocFreq
private int maxDocFreq
private boolean boost
private java.lang.String[] fieldNames
private int maxNumTokensParsed
private int minWordLen
private int maxWordLen
private int maxQueryTerms
private TFIDFSimilarity similarity
private final IndexReader ir
private float boostFactor
public MoreLikeThis(IndexReader ir)
public MoreLikeThis(IndexReader ir, TFIDFSimilarity sim)
public float getBoostFactor()
Returns the boost factor used when boosting terms.
See Also: setBoostFactor(float)

public void setBoostFactor(float boostFactor)
Sets the boost factor to use when boosting terms.
See Also: getBoostFactor()

public TFIDFSimilarity getSimilarity()

public void setSimilarity(TFIDFSimilarity similarity)

public Analyzer getAnalyzer()
Returns the analyzer that will be used to parse the source doc.

public void setAnalyzer(Analyzer analyzer)
Sets the analyzer to use. An analyzer is not required for generating a query with the like(int) method; all other 'like' methods require an analyzer.
Parameters: analyzer - the analyzer to use to tokenize text.

public int getMinTermFreq()
Returns the frequency below which terms will be ignored in the source doc. The default is DEFAULT_MIN_TERM_FREQ.

public void setMinTermFreq(int minTermFreq)
Parameters: minTermFreq - the frequency below which terms will be ignored in the source doc.

public int getMinDocFreq()
Returns the minimum document frequency: words that do not occur in at least this many docs are ignored. The default is DEFAULT_MIN_DOC_FREQ.

public void setMinDocFreq(int minDocFreq)
Parameters: minDocFreq - words that do not occur in at least this many docs are ignored.

public int getMaxDocFreq()
Returns the maximum frequency in which words may still appear. The default is DEFAULT_MAX_DOC_FREQ.

public void setMaxDocFreq(int maxFreq)
Parameters: maxFreq - the maximum count of documents that a term may appear in to be still considered relevant.

public void setMaxDocFreqPct(int maxPercentage)
Set the maximum percentage in which words may still appear. This calls setMaxDocFreq(int) internally (both conditions cannot be used at the same time).
Parameters: maxPercentage - the maximum percentage of documents (0-100) that a term may appear in to be still considered relevant.

public boolean isBoost()
Returns whether to boost terms in query based on "score" or not. The default is DEFAULT_BOOST.
See Also: setBoost(boolean)

public void setBoost(boolean boost)
Parameters: boost - true to boost terms in query based on "score", false otherwise.
See Also: isBoost()

public java.lang.String[] getFieldNames()
Returns the field names that will be used when generating the 'More Like This' query. The default is DEFAULT_FIELD_NAMES.

public void setFieldNames(java.lang.String[] fieldNames)
Parameters: fieldNames - the field names that will be used when generating the 'More Like This' query.

public int getMinWordLen()
Returns the minimum word length below which words will be ignored. The default is DEFAULT_MIN_WORD_LENGTH.

public void setMinWordLen(int minWordLen)
Parameters: minWordLen - the minimum word length below which words will be ignored.

public int getMaxWordLen()
Returns the maximum word length above which words will be ignored. The default is DEFAULT_MAX_WORD_LENGTH.

public void setMaxWordLen(int maxWordLen)
Parameters: maxWordLen - the maximum word length above which words will be ignored.

public void setStopWords(java.util.Set<?> stopWords)
Parameters: stopWords - set of stopwords; if null, stop words are allowed.
See Also: getStopWords()

public java.util.Set<?> getStopWords()
Get the current stop words being used.
See Also: setStopWords(java.util.Set<?>)

public int getMaxQueryTerms()
Returns the maximum number of query terms that will be included in any generated query. The default is DEFAULT_MAX_QUERY_TERMS.

public void setMaxQueryTerms(int maxQueryTerms)
Parameters: maxQueryTerms - the maximum number of query terms that will be included in any generated query.

public int getMaxNumTokensParsed()
The default is DEFAULT_MAX_NUM_TOKENS_PARSED.

public void setMaxNumTokensParsed(int i)
Parameters: i - the maximum number of tokens to parse in each example doc field that is not stored with TermVector support.

public Query like(int docNum) throws java.io.IOException
Return a query that will return docs like the passed lucene document ID.
Parameters: docNum - the documentID of the lucene doc to generate the 'More Like This' query for.
Throws: java.io.IOException

public Query like(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> filteredDocument) throws java.io.IOException
Parameters: filteredDocument - Document with field values extracted for selected fields.
Throws: java.io.IOException

public Query like(java.lang.String fieldName, java.io.Reader... readers) throws java.io.IOException
Return a query that will return docs like the passed Readers.
Throws: java.io.IOException

private Query createQuery(PriorityQueue<MoreLikeThis.ScoreTerm> q)
Create the More like query from a PriorityQueue.

private PriorityQueue<MoreLikeThis.ScoreTerm> createQueue(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies) throws java.io.IOException
Create a PriorityQueue from a word->tf map.
Parameters: perFieldTermFrequencies - a per-field map of words keyed on the word (String) with Int objects as the values.
Throws: java.io.IOException

private int getTermsCount(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies)

public java.lang.String describeParams()
Describe the parameters that control how the "more like this" query is formed.

private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(int docNum) throws java.io.IOException
Find words for a more-like-this query former.
Parameters: docNum - the id of the lucene document from which to find terms.
Throws: java.io.IOException

private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> field2fieldValues) throws java.io.IOException
Throws: java.io.IOException

private void addTermFrequencies(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> field2termFreqMap, Terms vector, java.lang.String fieldName) throws java.io.IOException
Adds terms and frequencies found in vector into the Map termFreqMap.
Parameters: field2termFreqMap - a Map of terms and their frequencies per field; vector - List of terms and their frequencies for a doc/field.
Throws: java.io.IOException

private void addTermFrequencies(java.io.Reader r, java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies, java.lang.String fieldName) throws java.io.IOException
Adds term frequencies found by tokenizing text from the reader into the Map words.
Parameters: r - a source of text to be tokenized; perFieldTermFrequencies - a Map of terms and their frequencies per field; fieldName - used by the analyzer for any special per-field analysis.
Throws: java.io.IOException

private boolean isNoiseWord(java.lang.String term)
Determines if the passed term is likely to be of interest in "more like" comparisons.
Parameters: term - the word being considered.

private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(java.io.Reader r, java.lang.String fieldName) throws java.io.IOException
Find words for a more-like-this query former.
Parameters: r - the reader that has the content of the document; fieldName - field passed to the analyzer to use when analyzing the content.
Throws: java.io.IOException
See Also: retrieveInterestingTerms(int)

public java.lang.String[] retrieveInterestingTerms(int docNum) throws java.io.IOException
Throws: java.io.IOException
See Also: retrieveInterestingTerms(java.io.Reader, String)

public java.lang.String[] retrieveInterestingTerms(java.io.Reader r, java.lang.String fieldName) throws java.io.IOException
Convenience routine to make it easy to return the most interesting words in a document. More advanced users will call retrieveTerms() directly.
Parameters: r - the source document; fieldName - field passed to the analyzer to use when analyzing the content.
Throws: java.io.IOException
See Also: retrieveTerms(java.io.Reader, String), setMaxQueryTerms(int)