Package | Description |
---|---|
org.apache.lucene.analysis | Text analysis. |
org.apache.lucene.analysis.cn.smart | Analyzer for Simplified Chinese, which indexes words. |
org.apache.lucene.analysis.core | Basic, general-purpose analysis components. |
org.apache.lucene.analysis.ngram | Character n-gram tokenizers and filters. |
org.apache.lucene.analysis.path | Analysis components for path-like strings such as filenames. |
org.apache.lucene.analysis.pattern | Set of components for pattern-based (regex) analysis. |
org.apache.lucene.analysis.standard | Fast, general-purpose grammar-based tokenizer StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. |
org.apache.lucene.analysis.th | Analyzer for Thai. |
org.apache.lucene.analysis.util | Utility functions for text analysis. |
org.apache.lucene.analysis.wikipedia | Tokenizer that is aware of Wikipedia syntax. |
Constructor and Description |
---|
TokenStreamComponents(Tokenizer tokenizer): Creates a new Analyzer.TokenStreamComponents from a Tokenizer. |
TokenStreamComponents(Tokenizer tokenizer, TokenStream result): Creates a new Analyzer.TokenStreamComponents instance. |
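These constructors are normally invoked from a custom Analyzer's createComponents override. A minimal sketch, assuming roughly Lucene 7.x APIs (where createComponents takes only the field name, and LowerCaseFilter lives in org.apache.lucene.analysis; in 6.x it is in org.apache.lucene.analysis.core). The analyzer name is illustrative:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;

// Illustrative analyzer: whitespace tokenization followed by lowercasing.
public class LowercasingWhitespaceAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer source = new WhitespaceTokenizer();
    TokenStream result = new LowerCaseFilter(source);
    // Two-arg constructor: the Tokenizer that receives the reader,
    // plus the outermost TokenStream of the filter chain.
    return new TokenStreamComponents(source, result);
    // With no filters it would be: return new TokenStreamComponents(source);
  }
}
```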
Modifier and Type | Class and Description |
---|---|
class | HMMChineseTokenizer: Tokenizer for Chinese or mixed Chinese-English text. |
Modifier and Type | Method and Description |
---|---|
Tokenizer | HMMChineseTokenizerFactory.create(AttributeFactory factory) |
Modifier and Type | Class and Description |
---|---|
class | KeywordTokenizer: Emits the entire input as a single token. |
class | LetterTokenizer: A tokenizer that divides text at non-letters. |
class | UnicodeWhitespaceTokenizer: A tokenizer that divides text at whitespace as defined by Unicode's whitespace property. |
class | WhitespaceTokenizer: A tokenizer that divides text at whitespace characters as defined by Character.isWhitespace(int). |
Modifier and Type | Method and Description |
---|---|
Tokenizer | WhitespaceTokenizerFactory.create(AttributeFactory factory) |
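A minimal sketch of driving one of these tokenizers, assuming the post-5.0 API in which input is supplied through setReader(Reader) rather than a constructor argument; the input string is illustrative:

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class WhitespaceTokenizerDemo {
  public static void main(String[] args) throws IOException {
    try (WhitespaceTokenizer tokenizer = new WhitespaceTokenizer()) {
      tokenizer.setReader(new StringReader("Doug Cutting  wrote Lucene"));
      CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
      tokenizer.reset();                       // required before the first incrementToken()
      while (tokenizer.incrementToken()) {
        System.out.println(term.toString());  // Doug / Cutting / wrote / Lucene
      }
      tokenizer.end();                         // finalize offsets
    }
  }
}
```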
Modifier and Type | Class and Description |
---|---|
class | EdgeNGramTokenizer: Tokenizes the input from an edge into n-grams of the given size(s). |
class | NGramTokenizer: Tokenizes the input into n-grams of the given size(s). |
Modifier and Type | Method and Description |
---|---|
Tokenizer | NGramTokenizerFactory.create(AttributeFactory factory) |
Tokenizer | EdgeNGramTokenizerFactory.create(AttributeFactory factory) |
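A sketch contrasting the two tokenizers, under the same assumptions as the example above: NGramTokenizer emits grams starting at every position, while EdgeNGramTokenizer emits only grams anchored at the start of the input.

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NGramDemo {
  // Helper: run a tokenizer over a string and print its tokens on one line.
  static void dump(Tokenizer tokenizer, String text) throws IOException {
    tokenizer.setReader(new StringReader(text));
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      System.out.print(term + " ");
    }
    tokenizer.end();
    tokenizer.close();
    System.out.println();
  }

  public static void main(String[] args) throws IOException {
    dump(new NGramTokenizer(2, 3), "abcd");     // ab abc bc bcd cd
    dump(new EdgeNGramTokenizer(2, 3), "abcd"); // ab abc
  }
}
```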
Modifier and Type | Class and Description |
---|---|
class | PathHierarchyTokenizer: Tokenizer for path-like hierarchies. |
class | ReversePathHierarchyTokenizer: Tokenizer for domain-like hierarchies. |
Modifier and Type | Method and Description |
---|---|
Tokenizer | PathHierarchyTokenizerFactory.create(AttributeFactory factory) |
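A sketch of PathHierarchyTokenizer under the same assumptions: each emitted token is a progressively longer prefix of the path, which is what makes ancestor-path matching work. ReversePathHierarchyTokenizer does the analogous thing from the right-hand end, which suits domain-like strings.

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.path.PathHierarchyTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class PathHierarchyDemo {
  public static void main(String[] args) throws IOException {
    // Default delimiter is '/', so each token is a deeper prefix of the path.
    try (PathHierarchyTokenizer tokenizer = new PathHierarchyTokenizer()) {
      tokenizer.setReader(new StringReader("/usr/local/bin"));
      CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
      tokenizer.reset();
      while (tokenizer.incrementToken()) {
        System.out.println(term); // /usr, then /usr/local, then /usr/local/bin
      }
      tokenizer.end();
    }
  }
}
```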
Modifier and Type | Class and Description |
---|---|
class | PatternTokenizer: This tokenizer uses regex pattern matching to construct distinct tokens for the input stream. |
class | SimplePatternSplitTokenizer: Tokenizer that treats regular-expression matches as token separators, emitting the text between matches. |
class | SimplePatternTokenizer: Tokenizer that emits each regular-expression match as a token. |
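A sketch of PatternTokenizer under the same assumptions. The group argument selects the mode: -1 splits the input at each match (similar to String.split), while a group index of 0 or higher instead emits the matched group itself as the token.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.regex.Pattern;
import org.apache.lucene.analysis.pattern.PatternTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class PatternTokenizerDemo {
  public static void main(String[] args) throws IOException {
    Pattern commaWithSpace = Pattern.compile(",\\s*");
    // group = -1: the pattern marks separators, not tokens.
    try (PatternTokenizer tokenizer = new PatternTokenizer(commaWithSpace, -1)) {
      tokenizer.setReader(new StringReader("alpha, beta,gamma"));
      CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
      tokenizer.reset();
      while (tokenizer.incrementToken()) {
        System.out.println(term); // alpha / beta / gamma
      }
      tokenizer.end();
    }
  }
}
```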
Modifier and Type | Class and Description |
---|---|
class | ClassicTokenizer: A grammar-based tokenizer constructed with JFlex. |
class | StandardTokenizer: A grammar-based tokenizer constructed with JFlex. |
class | UAX29URLEmailTokenizer: This class implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29; URLs and email addresses are also tokenized according to the relevant RFCs. |
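A sketch, under the same assumptions, of where UAX29URLEmailTokenizer differs from StandardTokenizer; the exact boundaries shown in the comments reflect UAX #29 word-break behavior and should be treated as illustrative:

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StandardVsUaxDemo {
  // Helper: print each token in brackets on a single line.
  static void dump(Tokenizer tokenizer, String text) throws IOException {
    tokenizer.setReader(new StringReader(text));
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      System.out.print("[" + term + "] ");
    }
    tokenizer.end();
    tokenizer.close();
    System.out.println();
  }

  public static void main(String[] args) throws IOException {
    String text = "Mail dev@example.com";
    dump(new StandardTokenizer(), text);      // splits at '@': [Mail] [dev] [example.com]
    dump(new UAX29URLEmailTokenizer(), text); // keeps the address: [Mail] [dev@example.com]
  }
}
```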
Modifier and Type | Class and Description |
---|---|
class | ThaiTokenizer: Tokenizer that uses BreakIterator to tokenize Thai text. |
Modifier and Type | Method and Description |
---|---|
Tokenizer | ThaiTokenizerFactory.create(AttributeFactory factory) |
Modifier and Type | Class and Description |
---|---|
class | CharTokenizer: An abstract base class for simple, character-oriented tokenizers. |
class | SegmentingTokenizerBase: Breaks text into sentences with a BreakIterator and allows subclasses to decompose these sentences into words. |
Modifier and Type | Method and Description |
---|---|
Tokenizer | TokenizerFactory.create(): Creates a TokenStream of the specified input using the default attribute factory. |
abstract Tokenizer | TokenizerFactory.create(AttributeFactory factory): Creates a TokenStream of the specified input using the given AttributeFactory. |
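A sketch of the two extension points in this package, under the same assumptions (CharTokenizer and TokenizerFactory in org.apache.lucene.analysis.util, as listed above): a CharTokenizer subclass needs only isTokenChar(int) to define a complete tokenizer, and TokenizerFactory.forName performs an SPI lookup by a factory's registered name. The CommaTokenizer class is hypothetical.

```java
import java.util.HashMap;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.CharTokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;

public class AnalysisUtilDemo {
  // Hypothetical CharTokenizer subclass: every character except ',' counts as
  // token text, so the input is effectively split on commas.
  public static final class CommaTokenizer extends CharTokenizer {
    @Override
    protected boolean isTokenChar(int c) {
      return c != ',';
    }
  }

  public static void main(String[] args) {
    // SPI lookup: "whitespace" resolves to WhitespaceTokenizerFactory;
    // create() then uses the default AttributeFactory.
    Tokenizer byName = TokenizerFactory.forName("whitespace", new HashMap<>()).create();
    Tokenizer custom = new CommaTokenizer();
    // Both are driven via setReader()/reset()/incrementToken() as shown earlier.
  }
}
```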
Modifier and Type | Class and Description |
---|---|
class | WikipediaTokenizer: Extension of StandardTokenizer that is aware of Wikipedia syntax. |