| Package | Description |
|---|---|
| `org.apache.lucene.analysis` | Text analysis. |
| `org.apache.lucene.analysis.ar` | Analyzer for Arabic. |
| `org.apache.lucene.analysis.bg` | Analyzer for Bulgarian. |
| `org.apache.lucene.analysis.br` | Analyzer for Brazilian Portuguese. |
| `org.apache.lucene.analysis.cjk` | Analyzer for Chinese, Japanese, and Korean, which indexes bigrams. |
| `org.apache.lucene.analysis.ckb` | Analyzer for Sorani Kurdish. |
| `org.apache.lucene.analysis.cn.smart` | Analyzer for Simplified Chinese, which indexes words. |
| `org.apache.lucene.analysis.commongrams` | Constructs n-grams for frequently occurring terms and phrases. |
| `org.apache.lucene.analysis.compound` | A filter that decomposes compound words found in many Germanic languages into their word parts. |
| `org.apache.lucene.analysis.core` | Basic, general-purpose analysis components. |
| `org.apache.lucene.analysis.cz` | Analyzer for Czech. |
| `org.apache.lucene.analysis.de` | Analyzer for German. |
| `org.apache.lucene.analysis.el` | Analyzer for Greek. |
| `org.apache.lucene.analysis.en` | Analyzer for English. |
| `org.apache.lucene.analysis.es` | Analyzer for Spanish. |
| `org.apache.lucene.analysis.fa` | Analyzer for Persian. |
| `org.apache.lucene.analysis.fi` | Analyzer for Finnish. |
| `org.apache.lucene.analysis.fr` | Analyzer for French. |
| `org.apache.lucene.analysis.ga` | Analyzer for Irish. |
| `org.apache.lucene.analysis.gl` | Analyzer for Galician. |
| `org.apache.lucene.analysis.hi` | Analyzer for Hindi. |
| `org.apache.lucene.analysis.hu` | Analyzer for Hungarian. |
| `org.apache.lucene.analysis.hunspell` | Stemming TokenFilter using a Java implementation of the Hunspell stemming algorithm. |
| `org.apache.lucene.analysis.icu` | Analysis components based on ICU. |
| `org.apache.lucene.analysis.icu.segmentation` | Tokenizer that breaks text into words with the Unicode Text Segmentation algorithm. |
| `org.apache.lucene.analysis.id` | Analyzer for Indonesian. |
| `org.apache.lucene.analysis.in` | Analyzer for Indian languages. |
| `org.apache.lucene.analysis.it` | Analyzer for Italian. |
| `org.apache.lucene.analysis.ja` | Analyzer for Japanese. |
| `org.apache.lucene.analysis.lv` | Analyzer for Latvian. |
| `org.apache.lucene.analysis.miscellaneous` | Miscellaneous TokenStreams. |
| `org.apache.lucene.analysis.morfologik` | Dictionary-driven lemmatization ("accurate stemming") filter and analyzer for Polish, driven by the Morfologik library developed by Dawid Weiss and Marcin Miłkowski. |
| `org.apache.lucene.analysis.ngram` | Character n-gram tokenizers and filters. |
| `org.apache.lucene.analysis.no` | Analyzer for Norwegian. |
| `org.apache.lucene.analysis.path` | Analysis components for path-like strings such as filenames. |
| `org.apache.lucene.analysis.pattern` | Set of components for pattern-based (regex) analysis. |
| `org.apache.lucene.analysis.payloads` | Convenience classes for creating payloads on Tokens. |
| `org.apache.lucene.analysis.phonetic` | Analysis components for phonetic search. |
| `org.apache.lucene.analysis.pt` | Analyzer for Portuguese. |
| `org.apache.lucene.analysis.reverse` | Filter to reverse token text. |
| `org.apache.lucene.analysis.ru` | Analyzer for Russian. |
| `org.apache.lucene.analysis.shingle` | Word n-gram filters. |
| `org.apache.lucene.analysis.sinks` | TeeSinkTokenFilter and useful implementations of TeeSinkTokenFilter.SinkFilter. |
| `org.apache.lucene.analysis.snowball` | TokenFilter and Analyzer implementations that use Snowball stemmers. |
| `org.apache.lucene.analysis.sr` | Analyzer for Serbian. |
| `org.apache.lucene.analysis.standard` | Fast, general-purpose grammar-based tokenizers. |
| `org.apache.lucene.analysis.standard.std40` | Backwards-compatible implementation to match Version.LUCENE_4_0. |
| `org.apache.lucene.analysis.stempel` | Stempel: algorithmic stemmer. |
| `org.apache.lucene.analysis.sv` | Analyzer for Swedish. |
| `org.apache.lucene.analysis.synonym` | Analysis components for synonyms. |
| `org.apache.lucene.analysis.th` | Analyzer for Thai. |
| `org.apache.lucene.analysis.tr` | Analyzer for Turkish. |
| `org.apache.lucene.analysis.uima` | Classes that integrate UIMA with Lucene's analysis API. |
| `org.apache.lucene.analysis.util` | Utility functions for text analysis. |
| `org.apache.lucene.analysis.wikipedia` | Tokenizer that is aware of Wikipedia syntax. |
| `org.apache.lucene.codecs` | Codecs API: API for customization of the encoding and structure of the index. |
| `org.apache.lucene.document` | The logical representation of a Document for indexing and searching. |
| `org.apache.lucene.index` | Code to maintain and access indices. |
| `org.apache.lucene.index.memory` | High-performance single-document main-memory Apache Lucene fulltext search index. |
| `org.apache.lucene.search` | Code to search indices. |
| `org.apache.lucene.search.highlight` | Highlighting of search terms. |
| `org.apache.lucene.search.suggest.analyzing` | Analyzer-based autosuggest. |
| `org.apache.lucene.search.suggest.document` | Support for document suggestion. |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `CachingTokenFilter`: This class can be used if the token attributes of a TokenStream are intended to be consumed more than once. |
| `class` | `CannedBinaryTokenStream`: TokenStream from a canned list of binary (BytesRef-based) tokens. |
| `class` | `CannedTokenStream`: TokenStream from a canned list of Tokens. |
| `class` | `CrankyTokenFilter`: Throws IOException from random TokenStream methods. |
| `class` | `LookaheadTokenFilter<T extends LookaheadTokenFilter.Position>`: An abstract TokenFilter that makes it easier to build graph token filters requiring some lookahead. |
| `class` | `MockFixedLengthPayloadFilter`: TokenFilter that adds random fixed-length payloads. |
| `class` | `MockGraphTokenFilter`: Randomly inserts overlapped (posInc=0) tokens, sometimes with posLength > 1. |
| `class` | `MockHoleInjectingTokenFilter`: Randomly injects holes (similar to what a StopFilter would do). |
| `class` | `MockRandomLookaheadTokenFilter`: Uses LookaheadTokenFilter to randomly peek at future tokens. |
| `class` | `MockTokenFilter`: A TokenFilter for testing that removes terms accepted by a DFA. |
| `class` | `MockTokenizer`: Tokenizer for testing. |
| `class` | `MockVariableLengthPayloadFilter`: TokenFilter that adds random variable-length payloads. |
| `class` | `NumericTokenStream`: Expert: provides a TokenStream for indexing numeric values that can be used by NumericRangeQuery. |
| `class` | `SimplePayloadFilter`: Simple payload filter that sets the payload as pos: XXXX. |
| `class` | `TokenFilter`: A TokenFilter is a TokenStream whose input is another TokenStream. |
| `class` | `Tokenizer`: A Tokenizer is a TokenStream whose input is a Reader. |
| `class` | `ValidatingTokenFilter`: A TokenFilter that checks consistency of the tokens (e.g. that offsets are consistent with one another). |
| Modifier and Type | Field and Description |
|---|---|
| `protected TokenStream` | `TokenFilter.input`: The source of tokens for this filter. |
| `protected TokenStream` | `Analyzer.TokenStreamComponents.sink`: The sink TokenStream, such as the outer TokenFilter decorating the chain. |
| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `Analyzer.TokenStreamComponents.getTokenStream()`: Returns the sink TokenStream. |
| `TokenStream` | `Analyzer.tokenStream(String fieldName, Reader reader)`: Returns a TokenStream suitable for fieldName, tokenizing the contents of reader. |
| `TokenStream` | `Analyzer.tokenStream(String fieldName, String text)`: Returns a TokenStream suitable for fieldName, tokenizing the contents of text. |
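The decorator relationship that runs through these tables (a Tokenizer produces tokens from raw input; each TokenFilter is itself a TokenStream wrapping another TokenStream) can be sketched in plain Java. This is a toy model, not Lucene's actual API: real TokenStreams use the attribute-based `incrementToken()`/`reset()`/`end()`/`close()` protocol rather than returning strings, and the class and method names below are illustrative only.

```java
import java.util.*;

// Toy stand-ins for the Lucene pattern: a TokenStream yields tokens one at a
// time, a Tokenizer is a TokenStream fed by raw text, and a TokenFilter is a
// TokenStream whose input is another TokenStream.
public class ToyAnalysis {
    interface TokenStream { String next(); }  // null signals the stream is exhausted

    static class WhitespaceTokenizer implements TokenStream {
        private final Iterator<String> it;
        WhitespaceTokenizer(String text) {
            it = Arrays.asList(text.trim().split("\\s+")).iterator();
        }
        public String next() { return it.hasNext() ? it.next() : null; }
    }

    static class LowerCaseFilter implements TokenStream {
        private final TokenStream input;  // the wrapped stream, cf. TokenFilter.input
        LowerCaseFilter(TokenStream input) { this.input = input; }
        public String next() {
            String t = input.next();
            return t == null ? null : t.toLowerCase(Locale.ROOT);
        }
    }

    // Drain a chain the way callers consume the result of Analyzer.tokenStream(...).
    static List<String> drain(TokenStream ts) {
        List<String> out = new ArrayList<>();
        for (String t = ts.next(); t != null; t = ts.next()) out.add(t);
        return out;
    }

    public static void main(String[] args) {
        TokenStream chain = new LowerCaseFilter(new WhitespaceTokenizer("Fast General Tokenizers"));
        System.out.println(drain(chain));  // [fast, general, tokenizers]
    }
}
```

The key design point mirrored here is that filters compose: any TokenStream can be handed to any TokenFilter constructor, which is exactly the shape of the `create(TokenStream input)` factory methods listed throughout this page.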
| Modifier and Type | Method and Description |
|---|---|
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] posIncrements)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, int[] posIncrements)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, int[] posIncrements, int[] posLengths, Integer finalOffset)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, int[] posIncrements, Integer finalOffset)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, Integer finalOffset)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements, int[] posLengths, Integer finalOffset)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements, int[] posLengths, Integer finalOffset, boolean offsetsAreCorrect)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements, int[] posLengths, Integer finalOffset, boolean[] keywordAtts, boolean offsetsAreCorrect)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements, int[] posLengths, Integer finalOffset, Integer finalPosInc, boolean[] keywordAtts, boolean offsetsAreCorrect)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, int[] startOffsets, int[] endOffsets, String[] types, int[] posIncrements, Integer finalOffset)` |
| `static void` | `BaseTokenStreamTestCase.assertTokenStreamContents(TokenStream ts, String[] output, String[] types)` |
| `Automaton` | `TokenStreamToAutomaton.toAutomaton(TokenStream in)`: Pulls the graph (including PositionLengthAttribute) from the provided TokenStream and creates the corresponding automaton, where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term. |
| Constructor and Description |
|---|
| `CachingTokenFilter(TokenStream input)`: Create a new CachingTokenFilter around input. |
| `CrankyTokenFilter(TokenStream input, Random random)`: Creates a new CrankyTokenFilter. |
| `LookaheadTokenFilter(TokenStream input)` |
| `MockFixedLengthPayloadFilter(Random random, TokenStream in, int length)` |
| `MockGraphTokenFilter(Random random, TokenStream input)` |
| `MockHoleInjectingTokenFilter(Random random, TokenStream in)` |
| `MockRandomLookaheadTokenFilter(Random random, TokenStream in)` |
| `MockTokenFilter(TokenStream input, CharacterRunAutomaton filter)`: Create a new MockTokenFilter. |
| `MockVariableLengthPayloadFilter(Random random, TokenStream in)` |
| `SimplePayloadFilter(TokenStream input)` |
| `TokenFilter(TokenStream input)`: Construct a token stream filtering the given input. |
| `TokenStreamComponents(Tokenizer source, TokenStream result)`: Creates a new Analyzer.TokenStreamComponents instance. |
| `TokenStreamToDot(String inputText, TokenStream in, PrintWriter out)`: If inputText is non-null and the TokenStream has offsets, the surface form is included in each arc's label. |
| `ValidatingTokenFilter(TokenStream in, String name, boolean offsetsAreCorrect)`: The name argument identifies this stage when throwing exceptions (useful if you have more than one instance in your chain). |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `ArabicNormalizationFilter`: A TokenFilter that applies ArabicNormalizer to normalize the orthography. |
| `class` | `ArabicStemFilter`: A TokenFilter that applies ArabicStemmer to stem Arabic words. |

| Modifier and Type | Method and Description |
|---|---|
| `ArabicStemFilter` | `ArabicStemFilterFactory.create(TokenStream input)` |
| `ArabicNormalizationFilter` | `ArabicNormalizationFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `ArabicNormalizationFilter(TokenStream input)` |
| `ArabicStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `BulgarianStemFilter`: A TokenFilter that applies BulgarianStemmer to stem Bulgarian words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `BulgarianStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `BulgarianStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `BrazilianStemFilter`: A TokenFilter that applies BrazilianStemmer. |

| Modifier and Type | Method and Description |
|---|---|
| `BrazilianStemFilter` | `BrazilianStemFilterFactory.create(TokenStream in)` |

| Constructor and Description |
|---|
| `BrazilianStemFilter(TokenStream in)`: Creates a new BrazilianStemFilter. |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `CJKBigramFilter`: Forms bigrams of CJK terms that are generated from StandardTokenizer or ICUTokenizer. |
| `class` | `CJKWidthFilter`: A TokenFilter that normalizes CJK width differences: folds fullwidth ASCII variants into the equivalent Basic Latin, and folds halfwidth Katakana variants into the equivalent kana. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `CJKWidthFilterFactory.create(TokenStream input)` |
| `TokenStream` | `CJKBigramFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `CJKBigramFilter(TokenStream in)` |
| `CJKBigramFilter(TokenStream in, int flags)` |
| `CJKBigramFilter(TokenStream in, int flags, boolean outputUnigrams)`: Create a new CJKBigramFilter, specifying which writing systems should be bigrammed and whether or not unigrams should also be output. |
| `CJKWidthFilter(TokenStream input)` |
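The bigramming idea behind CJKBigramFilter (index every overlapping pair of adjacent CJK characters, since CJK text lacks word-delimiting spaces) can be sketched as a toy function. This is not Lucene's implementation, which is flag-driven and script-aware; ASCII letters stand in for CJK characters here purely for demonstration.

```java
import java.util.*;

// Toy sketch of CJK bigramming: emit each overlapping pair of adjacent
// characters from a run, optionally also the single characters
// (cf. the outputUnigrams flag on CJKBigramFilter's constructor).
public class ToyCjkBigrams {
    static List<String> bigrams(String run, boolean outputUnigrams) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < run.length(); i++) {
            if (outputUnigrams) out.add(run.substring(i, i + 1));
            if (i + 1 < run.length()) out.add(run.substring(i, i + 2));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(bigrams("ABCD", false));  // [AB, BC, CD]
        System.out.println(bigrams("ABC", true));    // [A, AB, B, BC, C]
    }
}
```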
| Modifier and Type | Class and Description |
|---|---|
| `class` | `SoraniNormalizationFilter`: A TokenFilter that applies SoraniNormalizer to normalize the orthography. |
| `class` | `SoraniStemFilter`: A TokenFilter that applies SoraniStemmer to stem Sorani words. |

| Modifier and Type | Method and Description |
|---|---|
| `SoraniStemFilter` | `SoraniStemFilterFactory.create(TokenStream input)` |
| `SoraniNormalizationFilter` | `SoraniNormalizationFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `SoraniNormalizationFilter(TokenStream input)` |
| `SoraniStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `HMMChineseTokenizer`: Tokenizer for Chinese or mixed Chinese-English text. |
| `class` | `SentenceTokenizer`: Deprecated. Use HMMChineseTokenizer instead. |
| `class` | `WordTokenFilter`: Deprecated. Use HMMChineseTokenizer instead. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenFilter` | `SmartChineseWordTokenFilterFactory.create(TokenStream input)`: Deprecated. |

| Constructor and Description |
|---|
| `WordTokenFilter(TokenStream in)`: Deprecated. Construct a new WordTokenFilter. |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `CommonGramsFilter`: Constructs bigrams for frequently occurring terms while indexing. |
| `class` | `CommonGramsQueryFilter`: Wraps a CommonGramsFilter, optimizing phrase queries by only returning single words when they are not a member of a bigram. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenFilter` | `CommonGramsFilterFactory.create(TokenStream input)` |
| `TokenFilter` | `CommonGramsQueryFilterFactory.create(TokenStream input)`: Creates a CommonGramsFilter and wraps it with a CommonGramsQueryFilter. |

| Constructor and Description |
|---|
| `CommonGramsFilter(TokenStream input, CharArraySet commonWords)`: Construct a token stream filtering the given input, using a set of common words to create bigrams. |
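The common-grams idea above can be sketched as a toy function: alongside each token, emit a joined bigram for any adjacent pair in which either word is "common", so that high-frequency phrases become single indexable terms. This is a sketch of the concept only; Lucene's CommonGramsFilter additionally manages position increments and token types, which this toy ignores.

```java
import java.util.*;

// Toy sketch of common-grams: emit every token, plus a "_"-joined bigram
// whenever either member of an adjacent pair is in the common-word set.
public class ToyCommonGrams {
    static List<String> commonGrams(List<String> tokens, Set<String> common) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < tokens.size(); i++) {
            out.add(tokens.get(i));
            if (i + 1 < tokens.size()
                    && (common.contains(tokens.get(i)) || common.contains(tokens.get(i + 1)))) {
                out.add(tokens.get(i) + "_" + tokens.get(i + 1));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Set<String> common = new HashSet<>(Arrays.asList("the", "of"));
        System.out.println(commonGrams(Arrays.asList("the", "quick", "fox"), common));
        // [the, the_quick, quick, fox]
    }
}
```

At query time, the query-filter variant described above would keep only `the_quick` and `fox` for a phrase query, dropping the bare common word.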
| Modifier and Type | Class and Description |
|---|---|
| `class` | `CompoundWordTokenFilterBase`: Base class for decomposition token filters. |
| `class` | `DictionaryCompoundWordTokenFilter`: A TokenFilter that decomposes compound words found in many Germanic languages. |
| `class` | `HyphenationCompoundWordTokenFilter`: A TokenFilter that decomposes compound words found in many Germanic languages. |
| `class` | `Lucene43CompoundWordTokenFilterBase`: Deprecated. |
| `class` | `Lucene43DictionaryCompoundWordTokenFilter`: Deprecated. |
| `class` | `Lucene43HyphenationCompoundWordTokenFilter`: Deprecated. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `DictionaryCompoundWordTokenFilterFactory.create(TokenStream input)` |
| `TokenFilter` | `HyphenationCompoundWordTokenFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)` |
| `CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)` |
| `CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)` |
| `DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary)`: Creates a new DictionaryCompoundWordTokenFilter. |
| `DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)`: Creates a new DictionaryCompoundWordTokenFilter. |
| `HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator)`: Creates a HyphenationCompoundWordTokenFilter with no dictionary. |
| `HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary)`: Creates a new HyphenationCompoundWordTokenFilter instance. |
| `HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)`: Creates a new HyphenationCompoundWordTokenFilter instance. |
| `HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, int minWordSize, int minSubwordSize, int maxSubwordSize)`: Creates a HyphenationCompoundWordTokenFilter with no dictionary. |
| `Lucene43CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)`: Deprecated. |
| `Lucene43CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)`: Deprecated. |
| `Lucene43CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)`: Deprecated. |
| `Lucene43DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary)`: Deprecated. Creates a new Lucene43DictionaryCompoundWordTokenFilter. |
| `Lucene43DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)`: Deprecated. Creates a new Lucene43DictionaryCompoundWordTokenFilter. |
| `Lucene43HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator)`: Deprecated. Creates a HyphenationCompoundWordTokenFilter with no dictionary. |
| `Lucene43HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary)`: Deprecated. Creates a new Lucene43HyphenationCompoundWordTokenFilter instance. |
| `Lucene43HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)`: Deprecated. Creates a new Lucene43HyphenationCompoundWordTokenFilter instance. |
| `Lucene43HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, int minWordSize, int minSubwordSize, int maxSubwordSize)`: Deprecated. Creates a HyphenationCompoundWordTokenFilter with no dictionary. |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `KeywordTokenizer`: Emits the entire input as a single token. |
| `class` | `LetterTokenizer`: A tokenizer that divides text at non-letters. |
| `class` | `LowerCaseFilter`: Normalizes token text to lower case. |
| `class` | `LowerCaseTokenizer`: Performs the function of LetterTokenizer and LowerCaseFilter together. |
| `class` | `Lucene43StopFilter`: Deprecated. Use StopFilter. |
| `class` | `Lucene43TypeTokenFilter`: Deprecated. Use TypeTokenFilter. |
| `class` | `StopFilter`: Removes stop words from a token stream. |
| `class` | `TypeTokenFilter`: Removes tokens whose types appear in a set of blocked types from a token stream. |
| `class` | `UpperCaseFilter`: Normalizes token text to UPPER CASE. |
| `class` | `WhitespaceTokenizer`: A tokenizer that divides text at whitespace. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `TypeTokenFilterFactory.create(TokenStream input)` |
| `TokenStream` | `StopFilterFactory.create(TokenStream input)` |
| `LowerCaseFilter` | `LowerCaseFilterFactory.create(TokenStream input)` |
| `UpperCaseFilter` | `UpperCaseFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `LowerCaseFilter(TokenStream in)`: Create a new LowerCaseFilter that normalizes token text to lower case. |
| `Lucene43StopFilter(boolean enablePositionIncrements, TokenStream in, CharArraySet stopWords)`: Deprecated. |
| `Lucene43TypeTokenFilter(boolean enablePositionIncrements, TokenStream input, Set<String> stopTypes, boolean useWhiteList)`: Deprecated. |
| `StopFilter(TokenStream in, CharArraySet stopWords)`: Constructs a filter which removes from the input TokenStream the words named in the set. |
| `TypeTokenFilter(TokenStream input, Set<String> stopTypes)`: Create a new TypeTokenFilter that filters tokens out (useWhiteList=false). |
| `TypeTokenFilter(TokenStream input, Set<String> stopTypes, boolean useWhiteList)`: Create a new TypeTokenFilter. |
| `UpperCaseFilter(TokenStream in)`: Create a new UpperCaseFilter that normalizes token text to upper case. |
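The stop-word removal that StopFilter performs can be sketched as a toy function over a token list. This is the concept only, not Lucene's implementation: the real StopFilter also adjusts position increments so phrase queries still see the "hole" left by each removed word, which this sketch ignores.

```java
import java.util.*;

// Toy sketch of stop-word filtering: drop any token found in the stop set.
public class ToyStopFilter {
    static List<String> removeStopWords(List<String> tokens, Set<String> stopWords) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            if (!stopWords.contains(t)) out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        Set<String> stops = new HashSet<>(Arrays.asList("a", "the", "of"));
        System.out.println(removeStopWords(Arrays.asList("the", "lord", "of", "rings"), stops));
        // [lord, rings]
    }
}
```

Lucene's CharArraySet plays the role of the `Set<String>` here; it avoids allocating strings by matching directly against char buffers.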
| Modifier and Type | Class and Description |
|---|---|
| `class` | `CzechStemFilter`: A TokenFilter that applies CzechStemmer to stem Czech words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `CzechStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `CzechStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `GermanLightStemFilter`: A TokenFilter that applies GermanLightStemmer to stem German words. |
| `class` | `GermanMinimalStemFilter`: A TokenFilter that applies GermanMinimalStemmer to stem German words. |
| `class` | `GermanNormalizationFilter`: Normalizes German characters according to the heuristics of the German2 snowball algorithm. |
| `class` | `GermanStemFilter`: A TokenFilter that stems German words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `GermanNormalizationFilterFactory.create(TokenStream input)` |
| `TokenStream` | `GermanMinimalStemFilterFactory.create(TokenStream input)` |
| `TokenStream` | `GermanLightStemFilterFactory.create(TokenStream input)` |
| `GermanStemFilter` | `GermanStemFilterFactory.create(TokenStream in)` |

| Constructor and Description |
|---|
| `GermanLightStemFilter(TokenStream input)` |
| `GermanMinimalStemFilter(TokenStream input)` |
| `GermanNormalizationFilter(TokenStream input)` |
| `GermanStemFilter(TokenStream in)`: Creates a GermanStemFilter instance. |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `GreekLowerCaseFilter`: Normalizes token text to lower case, removes some Greek diacritics, and standardizes final sigma to sigma. |
| `class` | `GreekStemFilter`: A TokenFilter that applies GreekStemmer to stem Greek words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `GreekStemFilterFactory.create(TokenStream input)` |
| `GreekLowerCaseFilter` | `GreekLowerCaseFilterFactory.create(TokenStream in)` |

| Constructor and Description |
|---|
| `GreekLowerCaseFilter(TokenStream in)`: Create a GreekLowerCaseFilter that normalizes Greek token text. |
| `GreekStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `EnglishMinimalStemFilter`: A TokenFilter that applies EnglishMinimalStemmer to stem English words. |
| `class` | `EnglishPossessiveFilter`: TokenFilter that removes possessives (trailing 's) from words. |
| `class` | `KStemFilter`: A high-performance kstem filter for English. |
| `class` | `PorterStemFilter`: Transforms the token stream as per the Porter stemming algorithm. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `EnglishMinimalStemFilterFactory.create(TokenStream input)` |
| `TokenStream` | `EnglishPossessiveFilterFactory.create(TokenStream input)` |
| `TokenFilter` | `KStemFilterFactory.create(TokenStream input)` |
| `PorterStemFilter` | `PorterStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `EnglishMinimalStemFilter(TokenStream input)` |
| `EnglishPossessiveFilter(TokenStream input)` |
| `KStemFilter(TokenStream in)` |
| `PorterStemFilter(TokenStream in)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `SpanishLightStemFilter`: A TokenFilter that applies SpanishLightStemmer to stem Spanish words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `SpanishLightStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `SpanishLightStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `PersianNormalizationFilter`: A TokenFilter that applies PersianNormalizer to normalize the orthography. |

| Modifier and Type | Method and Description |
|---|---|
| `PersianNormalizationFilter` | `PersianNormalizationFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `PersianNormalizationFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `FinnishLightStemFilter`: A TokenFilter that applies FinnishLightStemmer to stem Finnish words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `FinnishLightStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `FinnishLightStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `FrenchLightStemFilter`: A TokenFilter that applies FrenchLightStemmer to stem French words. |
| `class` | `FrenchMinimalStemFilter`: A TokenFilter that applies FrenchMinimalStemmer to stem French words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `FrenchMinimalStemFilterFactory.create(TokenStream input)` |
| `TokenStream` | `FrenchLightStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `FrenchLightStemFilter(TokenStream input)` |
| `FrenchMinimalStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `IrishLowerCaseFilter`: Normalises token text to lower case, handling t-prothesis and n-eclipsis (i.e., 'nAthair' should become 'n-athair'). |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `IrishLowerCaseFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `IrishLowerCaseFilter(TokenStream in)`: Create an IrishLowerCaseFilter that normalises Irish token text. |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `GalicianMinimalStemFilter`: A TokenFilter that applies GalicianMinimalStemmer to stem Galician words. |
| `class` | `GalicianStemFilter`: A TokenFilter that applies GalicianStemmer to stem Galician words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `GalicianStemFilterFactory.create(TokenStream input)` |
| `TokenStream` | `GalicianMinimalStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `GalicianMinimalStemFilter(TokenStream input)` |
| `GalicianStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `HindiNormalizationFilter`: A TokenFilter that applies HindiNormalizer to normalize the orthography. |
| `class` | `HindiStemFilter`: A TokenFilter that applies HindiStemmer to stem Hindi words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `HindiNormalizationFilterFactory.create(TokenStream input)` |
| `TokenStream` | `HindiStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `HindiNormalizationFilter(TokenStream input)` |
| `HindiStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `HungarianLightStemFilter`: A TokenFilter that applies HungarianLightStemmer to stem Hungarian words. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `HungarianLightStemFilterFactory.create(TokenStream input)` |

| Constructor and Description |
|---|
| `HungarianLightStemFilter(TokenStream input)` |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `HunspellStemFilter`: TokenFilter that uses hunspell affix rules and words to stem tokens. |

| Modifier and Type | Method and Description |
|---|---|
| `TokenStream` | `HunspellStemFilterFactory.create(TokenStream tokenStream)` |

| Constructor and Description |
|---|
| `HunspellStemFilter(TokenStream input, Dictionary dictionary)`: Create a HunspellStemFilter outputting all possible stems. |
| `HunspellStemFilter(TokenStream input, Dictionary dictionary, boolean dedup)`: Create a HunspellStemFilter outputting all possible stems. |
| `HunspellStemFilter(TokenStream input, Dictionary dictionary, boolean dedup, boolean longestOnly)`: Creates a new HunspellStemFilter that will stem tokens from the given TokenStream using affix rules in the provided Dictionary. |
Modifier and Type | Class and Description |
---|---|
class |
ICUFoldingFilter
A TokenFilter that applies search term folding to Unicode text,
applying foldings from UTR#30 Character Foldings.
|
class |
ICUNormalizer2Filter
Normalize token text with ICU's
Normalizer2 |
class |
ICUTransformFilter
A
TokenFilter that transforms text with ICU. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
ICUNormalizer2FilterFactory.create(TokenStream input) |
TokenStream |
ICUTransformFilterFactory.create(TokenStream input) |
TokenStream |
ICUFoldingFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
ICUNormalizer2FilterFactory.create(TokenStream input) |
TokenStream |
ICUTransformFilterFactory.create(TokenStream input) |
TokenStream |
ICUFoldingFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
ICUFoldingFilter(TokenStream input)
Create a new ICUFoldingFilter on the specified input
|
ICUNormalizer2Filter(TokenStream input)
Create a new Normalizer2Filter that combines NFKC normalization, Case
Folding, and removes Default Ignorables (NFKC_Casefold)
|
ICUNormalizer2Filter(TokenStream input,
com.ibm.icu.text.Normalizer2 normalizer)
Create a new Normalizer2Filter with the specified Normalizer2
|
ICUTransformFilter(TokenStream input,
com.ibm.icu.text.Transliterator transform)
Create a new ICUTransformFilter that transforms text on the given stream.
|
Modifier and Type | Class and Description |
---|---|
class |
ICUTokenizer
Breaks text into words according to UAX #29: Unicode Text Segmentation
(http://www.unicode.org/reports/tr29/)
|
Modifier and Type | Class and Description |
---|---|
class |
IndonesianStemFilter
A
TokenFilter that applies IndonesianStemmer to stem Indonesian words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
IndonesianStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
IndonesianStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
IndonesianStemFilter(TokenStream input)
|
IndonesianStemFilter(TokenStream input,
boolean stemDerivational)
Create a new IndonesianStemFilter.
|
Modifier and Type | Class and Description |
---|---|
class |
IndicNormalizationFilter
A
TokenFilter that applies IndicNormalizer to normalize text
in Indian Languages. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
IndicNormalizationFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
IndicNormalizationFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
IndicNormalizationFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
ItalianLightStemFilter
A
TokenFilter that applies ItalianLightStemmer to stem Italian
words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
ItalianLightStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
ItalianLightStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
ItalianLightStemFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
JapaneseBaseFormFilter
Replaces term text with the
BaseFormAttribute . |
class |
JapaneseKatakanaStemFilter
A
TokenFilter that normalizes common katakana spelling variations
ending in a long sound character by removing this character (U+30FC). |
class |
JapanesePartOfSpeechStopFilter
Removes tokens that match a set of part-of-speech tags.
|
class |
JapaneseReadingFormFilter
A
TokenFilter that replaces the term
attribute with the reading of a token in either katakana or romaji form. |
class |
JapaneseTokenizer
Tokenizer for Japanese that uses morphological analysis.
|
class |
Lucene43JapanesePartOfSpeechStopFilter
Deprecated.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
JapaneseReadingFormFilterFactory.create(TokenStream input) |
TokenStream |
JapaneseKatakanaStemFilterFactory.create(TokenStream input) |
TokenStream |
JapaneseBaseFormFilterFactory.create(TokenStream input) |
TokenStream |
JapanesePartOfSpeechStopFilterFactory.create(TokenStream stream) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
JapaneseReadingFormFilterFactory.create(TokenStream input) |
TokenStream |
JapaneseKatakanaStemFilterFactory.create(TokenStream input) |
TokenStream |
JapaneseBaseFormFilterFactory.create(TokenStream input) |
TokenStream |
JapanesePartOfSpeechStopFilterFactory.create(TokenStream stream) |
Constructor and Description |
---|
JapaneseBaseFormFilter(TokenStream input) |
JapaneseKatakanaStemFilter(TokenStream input) |
JapaneseKatakanaStemFilter(TokenStream input,
int minimumLength) |
JapanesePartOfSpeechStopFilter(TokenStream input,
Set<String> stopTags)
Create a new
JapanesePartOfSpeechStopFilter . |
JapaneseReadingFormFilter(TokenStream input) |
JapaneseReadingFormFilter(TokenStream input,
boolean useRomaji) |
Lucene43JapanesePartOfSpeechStopFilter(boolean enablePositionIncrements,
TokenStream input,
Set<String> stopTags)
Deprecated.
Create a new
JapanesePartOfSpeechStopFilter . |
Modifier and Type | Class and Description |
---|---|
class |
LatvianStemFilter
A
TokenFilter that applies LatvianStemmer to stem Latvian
words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
LatvianStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
LatvianStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
LatvianStemFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
ASCIIFoldingFilter
This class converts alphabetic, numeric, and symbolic Unicode characters
which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
block) into their ASCII equivalents, if one exists.
|
class |
CapitalizationFilter
A filter to apply normal capitalization rules to Tokens.
|
class |
CodepointCountFilter
Removes words that are too long or too short from the stream.
|
class |
EmptyTokenStream
An always exhausted token stream.
|
class |
HyphenatedWordsFilter
When plain text is extracted from documents, many words are often hyphenated and broken across
two lines.
|
class |
KeepWordFilter
A TokenFilter that only keeps tokens with text contained in the
required words.
|
class |
KeywordMarkerFilter
Marks terms as keywords via the
KeywordAttribute . |
class |
KeywordRepeatFilter
This TokenFilter emits each incoming token twice, once as a keyword and once as a non-keyword; in other words, once with
KeywordAttribute.setKeyword(boolean) set to true and once set to false . |
class |
LengthFilter
Removes words that are too long or too short from the stream.
|
class |
LimitTokenCountFilter
This TokenFilter limits the number of tokens while indexing.
|
class |
LimitTokenOffsetFilter
Lets all tokens pass through until it sees one with a start offset greater than a
configured limit; that token won't pass, and the stream ends.
|
class |
LimitTokenPositionFilter
This TokenFilter limits its emitted tokens to those with positions that
are not greater than the configured limit.
|
class |
Lucene43KeepWordFilter
Deprecated.
Use
KeepWordFilter |
class |
Lucene43LengthFilter
Deprecated.
Use
LengthFilter |
class |
Lucene43TrimFilter
Deprecated.
Use
TrimFilter |
class |
Lucene47WordDelimiterFilter
Deprecated.
|
class |
PatternKeywordMarkerFilter
Marks terms as keywords via the
KeywordAttribute . |
class |
PrefixAndSuffixAwareTokenFilter
Links two
PrefixAwareTokenFilter . |
class |
PrefixAwareTokenFilter
Joins two token streams and leaves the last token of the first stream available
to be used when updating the token values in the second stream based on that token.
|
class |
RemoveDuplicatesTokenFilter
A TokenFilter which filters out Tokens at the same position and Term text as the previous token in the stream.
|
class |
ScandinavianFoldingFilter
This filter folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o.
|
class |
ScandinavianNormalizationFilter
This filter normalizes use of the interchangeable Scandinavian characters æÆäÄöÖøØ
and folded variants (aa, ao, ae, oe and oo) by transforming them to åÅæÆøØ.
|
class |
SetKeywordMarkerFilter
Marks terms as keywords via the
KeywordAttribute . |
class |
SingleTokenTokenStream
Deprecated.
Do not use this anymore!
|
class |
StemmerOverrideFilter
Provides the ability to override any
KeywordAttribute aware stemmer
with custom dictionary-based stemming. |
class |
TrimFilter
Trims leading and trailing whitespace from Tokens in the stream.
|
class |
TruncateTokenFilter
A token filter for truncating the terms into a specific length.
|
class |
WordDelimiterFilter
Splits words into subwords and performs optional transformations on subword
groups.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
TruncateTokenFilterFactory.create(TokenStream input) |
TokenStream |
LimitTokenOffsetFilterFactory.create(TokenStream input) |
TokenStream |
KeywordRepeatFilterFactory.create(TokenStream input) |
TokenStream |
KeepWordFilterFactory.create(TokenStream input) |
TokenStream |
LimitTokenCountFilterFactory.create(TokenStream input) |
TokenStream |
KeywordMarkerFilterFactory.create(TokenStream input) |
TokenStream |
StemmerOverrideFilterFactory.create(TokenStream input) |
TokenStream |
LimitTokenPositionFilterFactory.create(TokenStream input) |
TokenStream |
PrefixAwareTokenFilter.getPrefix() |
TokenStream |
PrefixAwareTokenFilter.getSuffix() |
Modifier and Type | Method and Description |
---|---|
TokenStream |
TruncateTokenFilterFactory.create(TokenStream input) |
TokenFilter |
LengthFilterFactory.create(TokenStream input) |
TokenStream |
LimitTokenOffsetFilterFactory.create(TokenStream input) |
RemoveDuplicatesTokenFilter |
RemoveDuplicatesTokenFilterFactory.create(TokenStream input) |
CapitalizationFilter |
CapitalizationFilterFactory.create(TokenStream input) |
CodepointCountFilter |
CodepointCountFilterFactory.create(TokenStream input) |
TokenFilter |
TrimFilterFactory.create(TokenStream input) |
TokenStream |
KeywordRepeatFilterFactory.create(TokenStream input) |
TokenStream |
KeepWordFilterFactory.create(TokenStream input) |
TokenStream |
LimitTokenCountFilterFactory.create(TokenStream input) |
TokenFilter |
WordDelimiterFilterFactory.create(TokenStream input) |
ScandinavianNormalizationFilter |
ScandinavianNormalizationFilterFactory.create(TokenStream input) |
ASCIIFoldingFilter |
ASCIIFoldingFilterFactory.create(TokenStream input) |
TokenStream |
KeywordMarkerFilterFactory.create(TokenStream input) |
HyphenatedWordsFilter |
HyphenatedWordsFilterFactory.create(TokenStream input) |
ScandinavianFoldingFilter |
ScandinavianFoldingFilterFactory.create(TokenStream input) |
TokenStream |
StemmerOverrideFilterFactory.create(TokenStream input) |
TokenStream |
LimitTokenPositionFilterFactory.create(TokenStream input) |
void |
PrefixAwareTokenFilter.setPrefix(TokenStream prefix) |
void |
PrefixAwareTokenFilter.setSuffix(TokenStream suffix) |
Constructor and Description |
---|
ASCIIFoldingFilter(TokenStream input) |
ASCIIFoldingFilter(TokenStream input,
boolean preserveOriginal)
Create a new
ASCIIFoldingFilter . |
CapitalizationFilter(TokenStream in)
Creates a CapitalizationFilter with the default parameters.
|
CapitalizationFilter(TokenStream in,
boolean onlyFirstWord,
CharArraySet keep,
boolean forceFirstLetter,
Collection<char[]> okPrefix,
int minWordLength,
int maxWordCount,
int maxTokenLength)
Creates a CapitalizationFilter with the specified parameters.
|
CodepointCountFilter(TokenStream in,
int min,
int max)
Create a new
CodepointCountFilter . |
HyphenatedWordsFilter(TokenStream in)
Creates a new HyphenatedWordsFilter
|
KeepWordFilter(TokenStream in,
CharArraySet words)
Create a new
KeepWordFilter . |
KeywordMarkerFilter(TokenStream in)
Creates a new
KeywordMarkerFilter |
KeywordRepeatFilter(TokenStream input)
Construct a token stream filtering the given input.
|
LengthFilter(TokenStream in,
int min,
int max)
Create a new
LengthFilter . |
LimitTokenCountFilter(TokenStream in,
int maxTokenCount)
Build a filter that only accepts tokens up to a maximum number.
|
LimitTokenCountFilter(TokenStream in,
int maxTokenCount,
boolean consumeAllTokens)
Build a filter that limits the maximum number of tokens per field.
|
LimitTokenOffsetFilter(TokenStream input,
int maxStartOffset)
Lets all tokens pass through until it sees one with a start offset greater than
maxStartOffset
; that token won't pass, and the stream ends. |
LimitTokenOffsetFilter(TokenStream input,
int maxStartOffset,
boolean consumeAllTokens) |
LimitTokenPositionFilter(TokenStream in,
int maxTokenPosition)
Build a filter that only accepts tokens up to and including the given maximum position.
|
LimitTokenPositionFilter(TokenStream in,
int maxTokenPosition,
boolean consumeAllTokens)
Build a filter that limits the maximum position of tokens to emit.
|
Lucene43KeepWordFilter(boolean enablePositionIncrements,
TokenStream in,
CharArraySet words)
Deprecated.
The words set passed to this constructor will be directly used by this filter
and should not be modified.
|
Lucene43LengthFilter(boolean enablePositionIncrements,
TokenStream in,
int min,
int max)
Deprecated.
Build a filter that removes words that are too long or too
short from the text.
|
Lucene43TrimFilter(TokenStream in,
boolean updateOffsets)
Deprecated.
|
Lucene47WordDelimiterFilter(TokenStream in,
byte[] charTypeTable,
int configurationFlags,
CharArraySet protWords)
Deprecated.
Creates a new WordDelimiterFilter
|
Lucene47WordDelimiterFilter(TokenStream in,
int configurationFlags,
CharArraySet protWords)
Deprecated.
Creates a new WordDelimiterFilter using
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE
as its charTypeTable |
PatternKeywordMarkerFilter(TokenStream in,
Pattern pattern)
Create a new
PatternKeywordMarkerFilter , that marks the current
token as a keyword if the token's term buffer matches the provided
Pattern via the KeywordAttribute . |
PrefixAndSuffixAwareTokenFilter(TokenStream prefix,
TokenStream input,
TokenStream suffix) |
PrefixAwareTokenFilter(TokenStream prefix,
TokenStream suffix) |
RemoveDuplicatesTokenFilter(TokenStream in)
Creates a new RemoveDuplicatesTokenFilter
|
ScandinavianFoldingFilter(TokenStream input) |
ScandinavianNormalizationFilter(TokenStream input) |
SetKeywordMarkerFilter(TokenStream in,
CharArraySet keywordSet)
Create a new SetKeywordMarkerFilter that marks the current token as a
keyword if the token's term buffer is contained in the given set via the
KeywordAttribute . |
StemmerOverrideFilter(TokenStream input,
StemmerOverrideFilter.StemmerOverrideMap stemmerOverrideMap)
Create a new StemmerOverrideFilter, performing dictionary-based stemming
with the provided
dictionary . |
TrimFilter(TokenStream in)
Create a new
TrimFilter . |
TruncateTokenFilter(TokenStream input,
int length) |
WordDelimiterFilter(TokenStream in,
byte[] charTypeTable,
int configurationFlags,
CharArraySet protWords)
Creates a new WordDelimiterFilter
|
WordDelimiterFilter(TokenStream in,
int configurationFlags,
CharArraySet protWords)
Creates a new WordDelimiterFilter using
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE
as its charTypeTable |
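WordDelimiterFilter splits terms into subwords at intra-word delimiters, case changes, and letter/digit boundaries. The following stdlib-only sketch approximates the default splitting behavior; it is a simplification for illustration and ignores the filter's configuration flags, protected-words set, and char type table:

```java
import java.util.ArrayList;
import java.util.List;

public class SubwordDemo {
    // Rough sketch of WordDelimiterFilter's default splitting: break on
    // non-alphanumerics, on lower-to-upper case changes, and on
    // letter/digit boundaries ("PowerShot500" -> Power, Shot, 500).
    static List<String> subwords(String term) {
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        char prev = 0;
        for (char c : term.toCharArray()) {
            boolean boundary =
                !Character.isLetterOrDigit(c)
                || (Character.isLowerCase(prev) && Character.isUpperCase(c))
                || (Character.isLetter(prev) && Character.isDigit(c))
                || (Character.isDigit(prev) && Character.isLetter(c));
            if (boundary && cur.length() > 0) {
                parts.add(cur.toString());
                cur.setLength(0);
            }
            if (Character.isLetterOrDigit(c)) {
                cur.append(c);
            }
            prev = c;
        }
        if (cur.length() > 0) parts.add(cur.toString());
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(subwords("Wi-Fi"));        // [Wi, Fi]
        System.out.println(subwords("PowerShot500")); // [Power, Shot, 500]
    }
}
```

The real filter can also catenate the generated subwords and preserve the original token, depending on its configuration flags.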
Modifier and Type | Class and Description |
---|---|
class |
MorfologikFilter
TokenFilter using the Morfologik library to transform input tokens into lemma and
morphosyntactic (POS) tokens. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
MorfologikFilterFactory.create(TokenStream ts) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
MorfologikFilterFactory.create(TokenStream ts) |
Constructor and Description |
---|
MorfologikFilter(TokenStream in)
Creates a filter with the default (Polish) dictionary.
|
MorfologikFilter(TokenStream in,
String dict)
Creates a filter with a given dictionary resource.
|
Modifier and Type | Class and Description |
---|---|
class |
EdgeNGramTokenFilter
Tokenizes the given token into n-grams of given size(s).
|
class |
EdgeNGramTokenizer
Tokenizes the input from an edge into n-grams of given size(s).
|
class |
Lucene43EdgeNGramTokenFilter
Deprecated.
Use
EdgeNGramTokenFilter . |
class |
Lucene43EdgeNGramTokenizer
Deprecated.
|
class |
Lucene43NGramTokenFilter
Deprecated.
Use
NGramTokenFilter instead. |
class |
Lucene43NGramTokenizer
Deprecated.
|
class |
NGramTokenFilter
Tokenizes the input into n-grams of the given size(s).
|
class |
NGramTokenizer
Tokenizes the input into n-grams of the given size(s).
|
Modifier and Type | Method and Description |
---|---|
TokenFilter |
NGramFilterFactory.create(TokenStream input) |
TokenFilter |
EdgeNGramFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
EdgeNGramTokenFilter(TokenStream input,
int minGram,
int maxGram)
Creates EdgeNGramTokenFilter that can generate n-grams in the sizes of the given range
|
Lucene43EdgeNGramTokenFilter(TokenStream input,
int minGram,
int maxGram)
Deprecated.
Creates EdgeNGramTokenFilter that can generate n-grams in the sizes of the given range
|
Lucene43NGramTokenFilter(TokenStream input)
Deprecated.
Creates NGramTokenFilter with default min and max n-grams.
|
Lucene43NGramTokenFilter(TokenStream input,
int minGram,
int maxGram)
Deprecated.
Creates Lucene43NGramTokenFilter with given min and max n-grams.
|
NGramTokenFilter(TokenStream input)
Creates NGramTokenFilter with default min and max n-grams.
|
NGramTokenFilter(TokenStream input,
int minGram,
int maxGram)
Creates NGramTokenFilter with given min and max n-grams.
|
Modifier and Type | Class and Description |
---|---|
class |
NorwegianLightStemFilter
A
TokenFilter that applies NorwegianLightStemmer to stem Norwegian
words. |
class |
NorwegianMinimalStemFilter
A
TokenFilter that applies NorwegianMinimalStemmer to stem Norwegian
words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
NorwegianLightStemFilterFactory.create(TokenStream input) |
TokenStream |
NorwegianMinimalStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
NorwegianLightStemFilterFactory.create(TokenStream input) |
TokenStream |
NorwegianMinimalStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
NorwegianLightStemFilter(TokenStream input)
|
NorwegianLightStemFilter(TokenStream input,
int flags)
Creates a new NorwegianLightStemFilter
|
NorwegianMinimalStemFilter(TokenStream input)
|
NorwegianMinimalStemFilter(TokenStream input,
int flags)
Creates a new NorwegianMinimalStemFilter
|
Modifier and Type | Class and Description |
---|---|
class |
PathHierarchyTokenizer
Tokenizer for path-like hierarchies.
|
class |
ReversePathHierarchyTokenizer
Tokenizer for domain-like hierarchies.
|
Modifier and Type | Class and Description |
---|---|
class |
PatternCaptureGroupTokenFilter
CaptureGroup uses Java regexes to emit multiple tokens - one for each capture
group in one or more patterns.
|
class |
PatternReplaceFilter
A TokenFilter which applies a Pattern to each token in the stream,
replacing match occurrences with the specified replacement string.
|
class |
PatternTokenizer
This tokenizer uses regex pattern matching to construct distinct tokens
for the input stream.
|
Modifier and Type | Method and Description |
---|---|
PatternReplaceFilter |
PatternReplaceFilterFactory.create(TokenStream input) |
PatternCaptureGroupTokenFilter |
PatternCaptureGroupFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
PatternCaptureGroupTokenFilter(TokenStream input,
boolean preserveOriginal,
Pattern... patterns) |
PatternReplaceFilter(TokenStream in,
Pattern p,
String replacement,
boolean all)
Constructs an instance to replace either the first or all occurrences.
|
Modifier and Type | Class and Description |
---|---|
class |
DelimitedPayloadTokenFilter
Characters before the delimiter are the "token", those after are the payload.
|
class |
NumericPayloadTokenFilter
Assigns a payload to a token based on the
PackedTokenAttributeImpl.type() |
class |
TokenOffsetPayloadTokenFilter
Adds the
OffsetAttribute.startOffset()
and OffsetAttribute.endOffset()
as the token's payload; the first 4 bytes are the start offset. |
class |
TypeAsPayloadTokenFilter
Makes the
PackedTokenAttributeImpl.type() a payload. |
Modifier and Type | Method and Description |
---|---|
TypeAsPayloadTokenFilter |
TypeAsPayloadTokenFilterFactory.create(TokenStream input) |
TokenOffsetPayloadTokenFilter |
TokenOffsetPayloadTokenFilterFactory.create(TokenStream input) |
NumericPayloadTokenFilter |
NumericPayloadTokenFilterFactory.create(TokenStream input) |
DelimitedPayloadTokenFilter |
DelimitedPayloadTokenFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
DelimitedPayloadTokenFilter(TokenStream input,
char delimiter,
PayloadEncoder encoder) |
NumericPayloadTokenFilter(TokenStream input,
float payload,
String typeMatch) |
TokenOffsetPayloadTokenFilter(TokenStream input) |
TypeAsPayloadTokenFilter(TokenStream input) |
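DelimitedPayloadTokenFilter treats the characters before the delimiter as the token text and those after it as the payload. A minimal sketch of that split, with the delimiter and example term chosen for illustration (the real filter also runs the payload through a PayloadEncoder):

```java
public class DelimitedPayloadDemo {
    // "jump|VERB" with delimiter '|' -> token "jump", payload "VERB".
    // Returns {token, payload}; payload is null when no delimiter is present.
    static String[] split(String term, char delimiter) {
        int i = term.lastIndexOf(delimiter);
        if (i < 0) {
            return new String[] { term, null }; // no payload attached
        }
        return new String[] { term.substring(0, i), term.substring(i + 1) };
    }

    public static void main(String[] args) {
        String[] t = split("jump|VERB", '|');
        System.out.println(t[0] + " -> " + t[1]); // jump -> VERB
    }
}
```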
Modifier and Type | Class and Description |
---|---|
class |
BeiderMorseFilter
TokenFilter for Beider-Morse phonetic encoding.
|
class |
DaitchMokotoffSoundexFilter
Create tokens for phonetic matches based on Daitch-Mokotoff Soundex.
|
class |
DoubleMetaphoneFilter
Filter for DoubleMetaphone (supporting secondary codes)
|
class |
PhoneticFilter
Create tokens for phonetic matches.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
BeiderMorseFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
DaitchMokotoffSoundexFilter |
DaitchMokotoffSoundexFilterFactory.create(TokenStream input) |
DoubleMetaphoneFilter |
DoubleMetaphoneFilterFactory.create(TokenStream input) |
PhoneticFilter |
PhoneticFilterFactory.create(TokenStream input) |
TokenStream |
BeiderMorseFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
BeiderMorseFilter(TokenStream input,
org.apache.commons.codec.language.bm.PhoneticEngine engine)
|
BeiderMorseFilter(TokenStream input,
org.apache.commons.codec.language.bm.PhoneticEngine engine,
org.apache.commons.codec.language.bm.Languages.LanguageSet languages)
Create a new BeiderMorseFilter
|
DaitchMokotoffSoundexFilter(TokenStream in,
boolean inject)
Creates a DaitchMokotoffSoundexFilter by either adding encoded forms as synonyms (
inject=true ) or replacing them. |
DoubleMetaphoneFilter(TokenStream input,
int maxCodeLength,
boolean inject)
Creates a DoubleMetaphoneFilter with the specified maximum code length,
and either adding encoded forms as synonyms (
inject=true ) or
replacing them. |
PhoneticFilter(TokenStream in,
org.apache.commons.codec.Encoder encoder,
boolean inject)
Creates a PhoneticFilter with the specified encoder, and either
adding encoded forms as synonyms (
inject=true ) or
replacing them. |
Modifier and Type | Class and Description |
---|---|
class |
PortugueseLightStemFilter
A
TokenFilter that applies PortugueseLightStemmer to stem
Portuguese words. |
class |
PortugueseMinimalStemFilter
A
TokenFilter that applies PortugueseMinimalStemmer to stem
Portuguese words. |
class |
PortugueseStemFilter
A
TokenFilter that applies PortugueseStemmer to stem
Portuguese words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
PortugueseMinimalStemFilterFactory.create(TokenStream input) |
TokenStream |
PortugueseStemFilterFactory.create(TokenStream input) |
TokenStream |
PortugueseLightStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
PortugueseMinimalStemFilterFactory.create(TokenStream input) |
TokenStream |
PortugueseStemFilterFactory.create(TokenStream input) |
TokenStream |
PortugueseLightStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
PortugueseLightStemFilter(TokenStream input) |
PortugueseMinimalStemFilter(TokenStream input) |
PortugueseStemFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
ReverseStringFilter
Reverse token string, for example "country" => "yrtnuoc".
|
Modifier and Type | Method and Description |
---|---|
ReverseStringFilter |
ReverseStringFilterFactory.create(TokenStream in) |
Constructor and Description |
---|
ReverseStringFilter(TokenStream in)
Create a new ReverseStringFilter that reverses all tokens in the
supplied
TokenStream . |
ReverseStringFilter(TokenStream in,
char marker)
Create a new ReverseStringFilter that reverses and marks all tokens in the
supplied
TokenStream . |
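The reversal itself is simple; a one-method sketch of the transformation ReverseStringFilter applies to each term:

```java
public class ReverseDemo {
    // ReverseStringFilter reverses each term, e.g. "country" -> "yrtnuoc".
    // Indexing reversed terms turns leading-wildcard queries (*try) into
    // cheap prefix queries (yrt*).
    static String reverse(String term) {
        return new StringBuilder(term).reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(reverse("country")); // yrtnuoc
    }
}
```

Note that the marker-char constructor above exists so reversed tokens can be distinguished from ordinary ones in the index; this sketch omits the marker.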
Modifier and Type | Class and Description |
---|---|
class |
RussianLightStemFilter
A
TokenFilter that applies RussianLightStemmer to stem Russian
words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
RussianLightStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
RussianLightStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
RussianLightStemFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
ShingleFilter
A ShingleFilter constructs shingles (token n-grams) from a token stream.
|
Modifier and Type | Method and Description |
---|---|
ShingleFilter |
ShingleFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
ShingleFilter(TokenStream input)
Construct a ShingleFilter with default shingle size: 2.
|
ShingleFilter(TokenStream input,
int maxShingleSize)
Constructs a ShingleFilter with the specified shingle size from the
TokenStream input |
ShingleFilter(TokenStream input,
int minShingleSize,
int maxShingleSize)
Constructs a ShingleFilter with the specified shingle size from the
TokenStream input |
ShingleFilter(TokenStream input,
String tokenType)
Construct a ShingleFilter with the specified token type for shingle tokens
and the default shingle size: 2
|
Modifier and Type | Class and Description |
---|---|
class |
TeeSinkTokenFilter
This TokenFilter provides the ability to set aside attribute states
that have already been analyzed.
|
static class |
TeeSinkTokenFilter.SinkTokenStream
TokenStream output from a tee with optional filtering.
|
Constructor and Description |
---|
TeeSinkTokenFilter(TokenStream input)
Instantiates a new TeeSinkTokenFilter.
|
Modifier and Type | Class and Description |
---|---|
class |
SnowballFilter
A filter that stems words using a Snowball-generated stemmer.
|
Modifier and Type | Method and Description |
---|---|
TokenFilter |
SnowballPorterFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
SnowballFilter(TokenStream input,
SnowballProgram stemmer) |
SnowballFilter(TokenStream in,
String name)
Construct the named stemming filter.
|
Modifier and Type | Class and Description |
---|---|
class |
SerbianNormalizationFilter
Normalizes Serbian Cyrillic and Latin characters to "bald" Latin.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
SerbianNormalizationFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
SerbianNormalizationFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
SerbianNormalizationFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
ClassicFilter
Normalizes tokens extracted with
ClassicTokenizer . |
class |
ClassicTokenizer
A grammar-based tokenizer constructed with JFlex
|
class |
StandardFilter
Normalizes tokens extracted with
StandardTokenizer . |
class |
StandardTokenizer
A grammar-based tokenizer constructed with JFlex.
|
class |
UAX29URLEmailTokenizer
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
Unicode Standard Annex #29
URLs and email addresses are also tokenized according to the relevant RFCs.
|
Modifier and Type | Method and Description |
---|---|
StandardFilter |
StandardFilterFactory.create(TokenStream input) |
TokenFilter |
ClassicFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
ClassicFilter(TokenStream in)
Construct a ClassicFilter that filters the given input.
|
StandardFilter(TokenStream in) |
Modifier and Type | Class and Description |
---|---|
class |
StandardTokenizer40
Deprecated.
|
class |
UAX29URLEmailTokenizer40
Deprecated.
|
Modifier and Type | Class and Description |
---|---|
class |
StempelFilter
Transforms the token stream as per the stemming algorithm.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
StempelPolishStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
StempelPolishStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
StempelFilter(TokenStream in,
StempelStemmer stemmer)
Create filter using the supplied stemming table.
|
StempelFilter(TokenStream in,
StempelStemmer stemmer,
int minLength)
Create filter using the supplied stemming table.
|
Modifier and Type | Class and Description |
---|---|
class |
SwedishLightStemFilter
A
TokenFilter that applies SwedishLightStemmer to stem Swedish
words. |
Modifier and Type | Method and Description |
---|---|
TokenStream |
SwedishLightStemFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
SwedishLightStemFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
SwedishLightStemFilter(TokenStream input) |
Modifier and Type | Class and Description |
---|---|
class |
SynonymFilter
Matches single or multi word synonyms in a token stream.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
SynonymFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
SynonymFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
SynonymFilter(TokenStream input,
SynonymMap synonyms,
boolean ignoreCase) |
Modifier and Type | Class and Description |
---|---|
class |
ThaiTokenizer
Tokenizer that uses
BreakIterator to tokenize Thai text. |
class |
ThaiWordFilter
Deprecated.
Use
ThaiTokenizer instead. |
Modifier and Type | Method and Description |
---|---|
ThaiWordFilter |
ThaiWordFilterFactory.create(TokenStream input)
Deprecated.
|
Constructor and Description |
---|
ThaiWordFilter(TokenStream input)
Deprecated.
Creates a new ThaiWordFilter with the specified match version.
|
Modifier and Type | Class and Description |
---|---|
class |
ApostropheFilter
Strips all characters after an apostrophe (including the apostrophe itself).
|
class |
TurkishLowerCaseFilter
Normalizes Turkish token text to lower case.
|
Modifier and Type | Method and Description |
---|---|
TokenStream |
TurkishLowerCaseFilterFactory.create(TokenStream input) |
TokenStream |
ApostropheFilterFactory.create(TokenStream input) |
Modifier and Type | Method and Description |
---|---|
TokenStream |
TurkishLowerCaseFilterFactory.create(TokenStream input) |
TokenStream |
ApostropheFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
ApostropheFilter(TokenStream in) |
TurkishLowerCaseFilter(TokenStream in)
Create a new TurkishLowerCaseFilter, that normalizes Turkish token text
to lower case.
|
Modifier and Type | Class and Description |
---|---|
class |
BaseUIMATokenizer
Abstract base implementation of a
Tokenizer which is able to analyze the given input with a
UIMA AnalysisEngine |
class |
UIMAAnnotationsTokenizer
A
Tokenizer which creates tokens from UIMA Annotations |
class |
UIMATypeAwareAnnotationsTokenizer
A
Tokenizer which creates tokens from UIMA Annotations, also filling their TypeAttribute according to the
FeaturePaths specified |
Modifier and Type | Class and Description |
---|---|
class |
CharTokenizer
An abstract base class for simple, character-oriented tokenizers.
|
class |
ElisionFilter
Removes elisions from a
TokenStream . |
class |
FilteringTokenFilter
Abstract base class for TokenFilters that may remove tokens.
|
class |
Lucene43FilteringTokenFilter
Deprecated.
|
class |
SegmentingTokenizerBase
Breaks text into sentences with a
BreakIterator and
allows subclasses to decompose these sentences into words. |
Modifier and Type | Method and Description |
---|---|
abstract TokenStream |
TokenFilterFactory.create(TokenStream input)
Transform the specified input TokenStream
|
ElisionFilter | ElisionFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
ElisionFilter(TokenStream input, CharArraySet articles): Constructs an elision filter with a Set of stop words. |
FilteringTokenFilter(TokenStream in): Create a new FilteringTokenFilter. |
Lucene43FilteringTokenFilter(boolean enablePositionIncrements, TokenStream input): Deprecated. |
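To illustrate the ElisionFilter(TokenStream, CharArraySet) constructor, a short sketch that strips French-style elided articles; the StandardTokenizer, the two-article set, and the sample phrase are assumptions for demonstration only:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.analysis.util.ElisionFilter;

public class ElisionDemo {
    public static List<String> analyze(String text) throws IOException {
        // Articles commonly elided in French; second argument ignores case.
        CharArraySet articles = new CharArraySet(Arrays.asList("l", "d"), true);
        StandardTokenizer tokenizer = new StandardTokenizer();
        tokenizer.setReader(new StringReader(text));
        TokenStream ts = new ElisionFilter(tokenizer, articles);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        List<String> out = new ArrayList<>();
        ts.reset();
        while (ts.incrementToken()) out.add(term.toString());
        ts.end();
        ts.close();
        return out;
    }

    public static void main(String[] args) throws IOException {
        // "l'avion d'été" -> elided articles removed from each token
        System.out.println(analyze("l'avion d'été"));
    }
}
```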
Modifier and Type | Class and Description |
---|---|
class | WikipediaTokenizer: Extension of StandardTokenizer that is aware of Wikipedia syntax. |
Modifier and Type | Method and Description |
---|---|
TokenStream | StoredFieldsWriter.MergeVisitor.tokenStream(Analyzer analyzer, TokenStream reuse) |
Modifier and Type | Field and Description |
---|---|
protected TokenStream | Field.tokenStream: Pre-analyzed tokenStream for indexed fields; this is separate from fieldsData because you are allowed to have both, e.g. the field may have a String value while you customize how it is tokenized. |
Modifier and Type | Method and Description |
---|---|
TokenStream | LazyDocument.LazyField.tokenStream(Analyzer analyzer, TokenStream reuse) |
TokenStream | Field.tokenStream(Analyzer analyzer, TokenStream reuse) |
TokenStream | Field.tokenStreamValue(): The TokenStream for this field to be used when indexing, or null. |
Modifier and Type | Method and Description |
---|---|
void | Field.setTokenStream(TokenStream tokenStream): Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true. |
TokenStream | LazyDocument.LazyField.tokenStream(Analyzer analyzer, TokenStream reuse) |
TokenStream | Field.tokenStream(Analyzer analyzer, TokenStream reuse) |
Constructor and Description |
---|
Field(String name, TokenStream tokenStream): Deprecated. Use TextField instead. |
Field(String name, TokenStream tokenStream, Field.TermVector termVector): Deprecated. Use TextField instead. |
Field(String name, TokenStream tokenStream, FieldType type): Create field with TokenStream value. |
TextField(String name, TokenStream stream): Creates a new un-stored TextField with TokenStream value. |
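A sketch of the recommended TextField(String, TokenStream) constructor, adding a pre-analyzed field to a document; the WhitespaceTokenizer stands in for whatever custom analysis chain produced the stream and is an assumption here:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;

public class PreanalyzedFieldDemo {
    public static Document build() {
        // Any TokenStream will do; here a simple tokenizer stands in for a
        // custom analysis chain whose output should be indexed as-is.
        WhitespaceTokenizer stream = new WhitespaceTokenizer();
        stream.setReader(new StringReader("pre analyzed token stream"));

        Document doc = new Document();
        // TextField(String, TokenStream) is indexed and tokenized but
        // un-stored: the raw value is not kept in the index.
        Field body = new TextField("body", stream);
        doc.add(body);
        return doc;
    }

    public static void main(String[] args) {
        System.out.println(build().getFields().size());
    }
}
```

Because the field is un-stored, pair it with a separate StoredField if the original text must also be retrievable.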
Modifier and Type | Class and Description |
---|---|
static class | BaseTermVectorsFormatTestCase.RandomTokenStream: Produces a random TokenStream based off of provided terms. |
Modifier and Type | Method and Description |
---|---|
TokenStream | IndexableField.tokenStream(Analyzer analyzer, TokenStream reuse): Creates the TokenStream used for indexing this field. |
Modifier and Type | Method and Description |
---|---|
<T> TokenStream | MemoryIndex.keywordTokenStream(Collection<T> keywords): Convenience method; creates and returns a token stream that generates a token for each keyword in the given collection, "as is", without any transforming text analysis. |
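A brief sketch of keywordTokenStream: each collection element becomes one verbatim token, so multi-word keywords stay intact. The sample keywords are illustrative assumptions:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.memory.MemoryIndex;

public class KeywordStreamDemo {
    public static List<String> terms(Collection<String> keywords) throws IOException {
        MemoryIndex index = new MemoryIndex();
        // One token per keyword, emitted verbatim: no tokenization,
        // no lowercasing, no stop-word removal.
        TokenStream ts = index.keywordTokenStream(keywords);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        List<String> out = new ArrayList<>();
        ts.reset();
        while (ts.incrementToken()) out.add(term.toString());
        ts.end();
        ts.close();
        return out;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(terms(Arrays.asList("New York", "Rio")));
    }
}
```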
Modifier and Type | Method and Description |
---|---|
void | MemoryIndex.addField(String fieldName, TokenStream stream): Equivalent to addField(fieldName, stream, 1.0f). |
void | MemoryIndex.addField(String fieldName, TokenStream stream, float boost): Iterates over the given token stream and adds the resulting terms to the index; equivalent to adding a tokenized, indexed, termVectorStored, unstored Lucene Field. |
void | MemoryIndex.addField(String fieldName, TokenStream stream, float boost, int positionIncrementGap): Iterates over the given token stream and adds the resulting terms to the index; equivalent to adding a tokenized, indexed, termVectorStored, unstored Lucene Field. |
void | MemoryIndex.addField(String fieldName, TokenStream tokenStream, float boost, int positionIncrementGap, int offsetGap): Iterates over the given token stream and adds the resulting terms to the index; equivalent to adding a tokenized, indexed, termVectorStored, unstored Lucene Field. |
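The addField overloads above feed a single in-memory document that can then be scored against a query. A minimal sketch, assuming a WhitespaceTokenizer and sample text not present in the table:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.TermQuery;

public class MemoryIndexDemo {
    public static float matchScore() throws IOException {
        MemoryIndex index = new MemoryIndex();

        WhitespaceTokenizer stream = new WhitespaceTokenizer();
        stream.setReader(new StringReader("quick brown fox"));
        // Equivalent to addField("body", stream, 1.0f).
        index.addField("body", stream);

        // search() returns a relevance score > 0 when the query matches
        // the single in-memory document.
        return index.search(new TermQuery(new Term("body", "fox")));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(matchScore() > 0.0f);
    }
}
```

MemoryIndex is designed for exactly this one-document, query-at-a-time pattern (e.g. document routing or prospective search), avoiding the cost of a full IndexWriter.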
Modifier and Type | Method and Description |
---|---|
TermAutomatonQuery | TokenStreamToTermAutomatonQuery.toQuery(String field, TokenStream in): Pulls the graph (including PositionLengthAttribute) from the provided TokenStream and creates the corresponding automaton, where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term. |
Modifier and Type | Class and Description |
---|---|
class | OffsetLimitTokenFilter: This TokenFilter limits the number of tokens while indexing by adding up the current offset. |
class | TokenStreamFromTermVector: TokenStream created from a term vector field. |
Modifier and Type | Method and Description |
---|---|
static TokenStream | TokenSources.getAnyTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer): Deprecated. |
static TokenStream | TokenSources.getAnyTokenStream(IndexReader reader, int docId, String field, Document document, Analyzer analyzer): Deprecated. |
static TokenStream | TokenSources.getTermVectorTokenStreamOrNull(String field, Fields tvFields, int maxStartOffset): Get a token stream by un-inverting the term vector. |
TokenStream | WeightedSpanTermExtractor.getTokenStream(): Returns the tokenStream, which may have been wrapped in a CachingTokenFilter. |
static TokenStream | TokenSources.getTokenStream(Document doc, String field, Analyzer analyzer): Deprecated. |
static TokenStream | TokenSources.getTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer): Deprecated. |
static TokenStream | TokenSources.getTokenStream(String field, Fields tvFields, String text, Analyzer analyzer, int maxStartOffset): Get a token stream from either un-inverting a term vector if possible, or by analyzing the text. |
static TokenStream | TokenSources.getTokenStream(String field, String contents, Analyzer analyzer): Deprecated. |
static TokenStream | TokenSources.getTokenStream(Terms tpv): Deprecated. |
static TokenStream | TokenSources.getTokenStream(Terms vector, boolean tokenPositionsGuaranteedContiguous): Deprecated. |
static TokenStream | TokenSources.getTokenStreamWithOffsets(IndexReader reader, int docId, String field): Deprecated. |
TokenStream | QueryScorer.init(TokenStream tokenStream) |
TokenStream | QueryTermScorer.init(TokenStream tokenStream) |
TokenStream | Scorer.init(TokenStream tokenStream): Called to init the Scorer with a TokenStream. |
Modifier and Type | Method and Description |
---|---|
String | Highlighter.getBestFragment(TokenStream tokenStream, String text): Highlights chosen terms in a text, extracting the most relevant section. |
String[] | Highlighter.getBestFragments(TokenStream tokenStream, String text, int maxNumFragments): Highlights chosen terms in a text, extracting the most relevant sections. |
String | Highlighter.getBestFragments(TokenStream tokenStream, String text, int maxNumFragments, String separator): Highlights terms in the text, extracting the most relevant sections and concatenating the chosen fragments with a separator (typically "..."). |
TextFragment[] | Highlighter.getBestTextFragments(TokenStream tokenStream, String text, boolean mergeContiguousFragments, int maxNumFragments): Low-level API to get the most relevant (formatted) sections of the document. |
Map<String,WeightedSpanTerm> | WeightedSpanTermExtractor.getWeightedSpanTerms(Query query, TokenStream tokenStream): Creates a Map of WeightedSpanTerms from the given Query and TokenStream. |
Map<String,WeightedSpanTerm> | WeightedSpanTermExtractor.getWeightedSpanTerms(Query query, TokenStream tokenStream, String fieldName): Creates a Map of WeightedSpanTerms from the given Query and TokenStream. |
Map<String,WeightedSpanTerm> | WeightedSpanTermExtractor.getWeightedSpanTermsWithScores(Query query, TokenStream tokenStream, String fieldName, IndexReader reader): Creates a Map of WeightedSpanTerms from the given Query and TokenStream. |
TokenStream | QueryScorer.init(TokenStream tokenStream) |
TokenStream | QueryTermScorer.init(TokenStream tokenStream) |
TokenStream | Scorer.init(TokenStream tokenStream): Called to init the Scorer with a TokenStream. |
void | Fragmenter.start(String originalText, TokenStream tokenStream): Initializes the Fragmenter. |
void | SimpleSpanFragmenter.start(String originalText, TokenStream tokenStream) |
void | SimpleFragmenter.start(String originalText, TokenStream stream) |
void | NullFragmenter.start(String s, TokenStream tokenStream) |
Constructor and Description |
---|
OffsetLimitTokenFilter(TokenStream input, int offsetLimit) |
TokenGroup(TokenStream tokenStream) |
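To show how a TokenStream feeds the highlighter, a minimal sketch using Highlighter.getBestFragment with a QueryScorer; the WhitespaceTokenizer, field name, and sample text are assumptions, and the default formatter wraps matches in <B> tags:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
import org.apache.lucene.search.highlight.QueryScorer;

public class HighlightDemo {
    public static String highlight() throws IOException, InvalidTokenOffsetsException {
        String text = "the quick brown fox jumps over the lazy dog";
        TermQuery query = new TermQuery(new Term("body", "fox"));

        // The scorer ranks candidate fragments by how well they match.
        Highlighter highlighter = new Highlighter(new QueryScorer(query));

        // The stream must be re-analyzed (or un-inverted via TokenSources)
        // from the same text being highlighted, so offsets line up.
        WhitespaceTokenizer stream = new WhitespaceTokenizer();
        stream.setReader(new StringReader(text));

        return highlighter.getBestFragment(stream, text);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(highlight());
    }
}
```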
Modifier and Type | Class and Description |
---|---|
class | SuggestStopFilter: Like StopFilter, except it will not remove the last token if that token was not followed by some token separator. |
Modifier and Type | Method and Description |
---|---|
TokenStream | SuggestStopFilterFactory.create(TokenStream input) |
Constructor and Description |
---|
SuggestStopFilter(TokenStream input, CharArraySet stopWords): Sole constructor. |
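A sketch of the "keep the trailing token" behavior that distinguishes SuggestStopFilter from StopFilter, useful when the user is still typing a suggester query. The WhitespaceTokenizer, stop set, and inputs are illustrative assumptions:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.search.suggest.analyzing.SuggestStopFilter;

public class SuggestStopDemo {
    public static List<String> analyze(String text) throws IOException {
        CharArraySet stops = new CharArraySet(Arrays.asList("the"), true);
        WhitespaceTokenizer tok = new WhitespaceTokenizer();
        tok.setReader(new StringReader(text));
        TokenStream ts = new SuggestStopFilter(tok, stops);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        List<String> out = new ArrayList<>();
        ts.reset();
        while (ts.incrementToken()) out.add(term.toString());
        ts.end();
        ts.close();
        return out;
    }

    public static void main(String[] args) throws IOException {
        // Mid-query stop word (followed by a separator) is dropped...
        System.out.println(analyze("the ghost"));
        // ...but a trailing "the" with no separator after it is kept:
        // the user may still be typing "theater", "these", ...
        System.out.println(analyze("summer the"));
    }
}
```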
Modifier and Type | Class and Description |
---|---|
class | CompletionTokenStream: Token stream which converts a provided token stream to an automaton. |
Modifier and Type | Method and Description |
---|---|
TokenStream | SuggestField.tokenStream(Analyzer analyzer, TokenStream reuse) |
protected CompletionTokenStream | ContextSuggestField.wrapTokenStream(TokenStream stream) |
protected CompletionTokenStream | SuggestField.wrapTokenStream(TokenStream stream): Wraps a stream with a CompletionTokenStream. |
Copyright © 2000–2015 The Apache Software Foundation. All rights reserved.