public final class SimplePatternSplitTokenizer extends Tokenizer

This tokenizer uses a RegExp or (expert usage) a pre-built determinized Automaton to locate tokens. The regexp syntax is more limited than PatternTokenizer, but the tokenization is quite a bit faster. This is just like SimplePatternTokenizer, except that the pattern matches the token separator characters rather than the tokens themselves, like String.split. Empty string tokens are never produced.

Nested classes/interfaces inherited from class AttributeSource: AttributeSource.State
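For illustration, a minimal usage sketch (not part of the original Javadoc) that splits on runs of spaces, the separator-matching analogue of String.split. The demo class name SplitDemo is invented; the Lucene types are assumed to come from the analysis-common and core modules.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.pattern.SimplePatternSplitTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class SplitDemo {
  public static void main(String[] args) throws Exception {
    // The pattern describes the separators, not the tokens (like String.split):
    // here, one or more spaces.
    Tokenizer tok = new SimplePatternSplitTokenizer(" +");

    // Attribute references are retrieved once, up front.
    CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
    OffsetAttribute offsets = tok.addAttribute(OffsetAttribute.class);

    tok.setReader(new StringReader("foo  bar baz"));
    tok.reset();
    while (tok.incrementToken()) {
      // Prints: foo [0,3)  bar [5,8)  baz [9,12) -- no empty token for the double space
      System.out.println(term.toString() + " [" + offsets.startOffset() + "," + offsets.endOffset() + ")");
    }
    tok.end();
    tok.close();
  }
}
```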
| Modifier and Type | Field and Description |
|---|---|
| private char[] | buffer |
| private int | bufferLimit |
| private int | bufferNextRead |
| private int | offset |
| private OffsetAttribute | offsetAtt |
| private char[] | pendingChars |
| private int | pendingLimit |
| private int | pendingUpto |
| private CharacterRunAutomaton | runDFA |
| private int | sepUpto |
| private CharTermAttribute | termAtt |
| private int | tokenUpto |
Fields inherited from class Tokenizer: DEFAULT_TOKEN_ATTRIBUTE_FACTORY
| Constructor | Description |
|---|---|
| SimplePatternSplitTokenizer(AttributeFactory factory, Automaton dfa) | Runs a pre-built automaton. |
| SimplePatternSplitTokenizer(AttributeFactory factory, java.lang.String regexp, int maxDeterminizedStates) | See RegExp for the accepted syntax. |
| SimplePatternSplitTokenizer(Automaton dfa) | Runs a pre-built automaton. |
| SimplePatternSplitTokenizer(java.lang.String regexp) | See RegExp for the accepted syntax. |
| Modifier and Type | Method | Description |
|---|---|---|
| private void | appendToToken(char ch) | |
| void | end() | This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). |
| private void | fillToken(int offsetStart) | |
| boolean | incrementToken() | Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
| private int | nextCodePoint() | |
| private int | nextCodeUnit() | |
| private void | pushBack(int count) | Pushes back the last count characters in the current token's buffer. |
| void | reset() | This method is called by a consumer before it begins consumption using TokenStream.incrementToken(). |
Methods inherited from class Tokenizer: close, correctOffset, setReader

Methods inherited from class AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
private final CharTermAttribute termAtt
private final OffsetAttribute offsetAtt
private final CharacterRunAutomaton runDFA
private char[] pendingChars
private int tokenUpto
private int pendingLimit
private int pendingUpto
private int offset
private int sepUpto
private final char[] buffer
private int bufferLimit
private int bufferNextRead
public SimplePatternSplitTokenizer(java.lang.String regexp)
See RegExp for the accepted syntax.

public SimplePatternSplitTokenizer(Automaton dfa)
Runs a pre-built automaton.

public SimplePatternSplitTokenizer(AttributeFactory factory, java.lang.String regexp, int maxDeterminizedStates)
See RegExp for the accepted syntax.

public SimplePatternSplitTokenizer(AttributeFactory factory, Automaton dfa)
Runs a pre-built automaton.
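As a hedged sketch of the expert constructors, the caller can compile and determinize the separator pattern once and reuse the resulting Automaton across tokenizer instances. The demo class name is invented, and Operations.DEFAULT_MAX_DETERMINIZED_STATES is assumed to exist under that name (it was renamed in later Lucene versions).

```java
import org.apache.lucene.analysis.pattern.SimplePatternSplitTokenizer;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.Operations;
import org.apache.lucene.util.automaton.RegExp;

public class PrebuiltAutomatonDemo {
  public static void main(String[] args) {
    // Compile the separator pattern (a comma or semicolon) once, determinize it,
    // and hand the resulting DFA to the expert constructor.
    Automaton dfa = Operations.determinize(
        new RegExp("[,;]").toAutomaton(),
        Operations.DEFAULT_MAX_DETERMINIZED_STATES);
    SimplePatternSplitTokenizer tok = new SimplePatternSplitTokenizer(dfa);
    // tok.setReader(...), tok.reset(), ... as in the earlier usage sketch.
  }
}
```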
private void fillToken(int offsetStart)
public boolean incrementToken() throws java.io.IOException

Description copied from class: TokenStream

Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

Specified by:
incrementToken in class TokenStream

Throws:
java.io.IOException
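To illustrate the contract above, attributes should be added and their references cached at construction time, never looked up inside incrementToken(). The filter below is a hypothetical sketch (its name and its ASCII lower-casing behavior are invented for the example), not part of this class.

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class AsciiLowerCaseFilter extends TokenFilter {
  // Attribute added and reference cached at instantiation, per the contract above.
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public AsciiLowerCaseFilter(TokenStream in) {
    super(in);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;  // no more tokens from the wrapped stream
    }
    // Update the shared attribute in place; no addAttribute/getAttribute calls here.
    char[] buf = termAtt.buffer();
    for (int i = 0; i < termAtt.length(); i++) {
      if (buf[i] >= 'A' && buf[i] <= 'Z') {
        buf[i] = (char) (buf[i] + ('a' - 'A'));
      }
    }
    return true;
  }
}
```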
public void end() throws java.io.IOException

Description copied from class: TokenStream

This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters followed the last token but a WhitespaceTokenizer was used.

Additionally, any skipped positions (such as those removed by a stop filter) can be applied to the position increment, as can any adjustment of other attributes where the end-of-stream value may be important.

If you override this method, always call super.end().

Overrides:
end in class TokenStream

Throws:
java.io.IOException - If an I/O error occurs

public void reset() throws java.io.IOException
Description copied from class: TokenStream

This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).
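A minimal sketch of the end()/reset() contract just described, using a hypothetical stateful filter (the name CountingFilter is invented): both overrides delegate to super, and reset() clears the filter's own state so the stream can be reused.

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public final class CountingFilter extends TokenFilter {
  private int tokenCount;

  public CountingFilter(TokenStream in) {
    super(in);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      tokenCount++;
      return true;
    }
    return false;
  }

  @Override
  public void end() throws IOException {
    super.end();      // always call super.end(): lets the wrapped stream set its final offset
    // end-of-stream work (e.g. inspecting tokenCount) would go here
  }

  @Override
  public void reset() throws IOException {
    super.reset();    // always call super.reset(): otherwise the wrapped Tokenizer
                      // throws IllegalStateException on further usage
    tokenCount = 0;   // clear our own state so the stream can be reused
  }
}
```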
private void pushBack(int count)

Pushes back the last count characters in the current token's buffer.

private void appendToToken(char ch)
private int nextCodeUnit() throws java.io.IOException

Throws:
java.io.IOException

private int nextCodePoint() throws java.io.IOException

Throws:
java.io.IOException