public abstract class CompoundWordTokenFilterBase extends TokenFilter

Base class for decomposition token filters.

Nested classes/interfaces inherited from class AttributeSource:
AttributeSource.AttributeFactory, AttributeSource.State
Modifier and Type | Field and Description |
---|---|
static int | DEFAULT_MAX_SUBWORD_SIZE - The default for the maximal length of subwords that get propagated to the output of this filter |
static int | DEFAULT_MIN_SUBWORD_SIZE - The default for the minimal length of subwords that get propagated to the output of this filter |
static int | DEFAULT_MIN_WORD_SIZE - The default for the minimal word length that gets decomposed |
protected CharArraySet | dictionary |
protected int | maxSubwordSize |
protected int | minSubwordSize |
protected int | minWordSize |
protected boolean | onlyLongestMatch |
protected java.util.LinkedList | tokens |

Fields inherited from class TokenFilter:
input
Modifier | Constructor and Description |
---|---|
protected | CompoundWordTokenFilterBase(TokenStream input, java.util.Set dictionary) |
protected | CompoundWordTokenFilterBase(TokenStream input, java.util.Set dictionary, boolean onlyLongestMatch) |
protected | CompoundWordTokenFilterBase(TokenStream input, java.util.Set dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch) |
protected | CompoundWordTokenFilterBase(TokenStream input, java.lang.String[] dictionary) |
protected | CompoundWordTokenFilterBase(TokenStream input, java.lang.String[] dictionary, boolean onlyLongestMatch) |
protected | CompoundWordTokenFilterBase(TokenStream input, java.lang.String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch) |
Modifier and Type | Method and Description |
---|---|
protected static void | addAllLowerCase(java.util.Set target, java.util.Collection col) |
protected Token | createToken(int offset, int length, Token prototype) |
protected void | decompose(Token token) |
protected abstract void | decomposeInternal(Token token) |
boolean | incrementToken() - Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
static java.util.Set | makeDictionary(java.lang.String[] dictionary) - Create a set of words from an array. The resulting Set does case-insensitive matching. TODO: We should look for a faster dictionary lookup approach. |
protected static char[] | makeLowerCaseCopy(char[] buffer) |
Token | next() - Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. |
Token | next(Token reusableToken) - Deprecated. Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. |
void | reset() - Reset the filter as well as the input TokenStream. |

Methods inherited from class TokenFilter:
close, end

Methods inherited from class TokenStream:
getOnlyUseNewAPI, setOnlyUseNewAPI

Methods inherited from class AttributeSource:
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, restoreState, toString
public static final int DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
protected final CharArraySet dictionary
protected final java.util.LinkedList tokens
protected final int minWordSize
protected final int minSubwordSize
protected final int maxSubwordSize
protected final boolean onlyLongestMatch
protected CompoundWordTokenFilterBase(TokenStream input, java.lang.String[] dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(TokenStream input, java.lang.String[] dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(TokenStream input, java.util.Set dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(TokenStream input, java.lang.String[] dictionary)
protected CompoundWordTokenFilterBase(TokenStream input, java.util.Set dictionary)
protected CompoundWordTokenFilterBase(TokenStream input, java.util.Set dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
public static final java.util.Set makeDictionary(java.lang.String[] dictionary)
Create a set of words from an array. The resulting Set does case-insensitive matching.
Parameters:
dictionary
Returns:
Set of lowercased terms

public final boolean incrementToken() throws java.io.IOException
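makeDictionary lowercases every entry into a set so that subsequent lookups can be case-insensitive. A dependency-free sketch of that idea, using plain Java collections rather than Lucene's CharArraySet (class and method names here are illustrative, not Lucene API):

```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class DictionarySketch {
    // Sketch of makeDictionary's behavior: store every entry lowercased,
    // so matching becomes case-insensitive once the probe is lowercased too.
    public static Set<String> makeDictionary(String[] words) {
        Set<String> dict = new HashSet<>();
        for (String w : words) {
            dict.add(w.toLowerCase(Locale.ROOT));
        }
        return dict;
    }

    // Case-insensitive membership test against the lowercased dictionary.
    public static boolean contains(Set<String> dict, String term) {
        return dict.contains(term.toLowerCase(Locale.ROOT));
    }
}
```

Note that Lucene's CharArraySet avoids the per-lookup String allocation this sketch incurs, which is what the "faster dictionary lookup" TODO above alludes to.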
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has been returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class) or downcasts, references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

Overrides:
incrementToken in class TokenStream
Note that this method will be defined abstract in Lucene 3.0.
Throws:
java.io.IOException
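The consumer pattern described above - advance with incrementToken() until it returns false, reading the current token state after each successful call - can be illustrated with a minimal stand-in. These are simplified illustrative types, not the real Lucene TokenStream and attribute classes:

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumerLoopSketch {
    // Stand-in for a token stream: incrementToken() advances the stream and
    // reports whether a token is available; term() exposes the current token,
    // the way a consumer would read attributes after each advance.
    interface SimpleStream {
        boolean incrementToken();
        String term();
    }

    // A trivial producer backed by an array of terms.
    static class ArrayStream implements SimpleStream {
        private final String[] terms;
        private int pos = -1;
        ArrayStream(String... terms) { this.terms = terms; }
        public boolean incrementToken() { return ++pos < terms.length; }
        public String term() { return terms[pos]; }
    }

    // The consumer loop: keep advancing until end of stream.
    public static List<String> consume(SimpleStream stream) {
        List<String> out = new ArrayList<>();
        while (stream.incrementToken()) {
            out.add(stream.term());
        }
        return out;
    }
}
```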
public final Token next(Token reusableToken) throws java.io.IOException
Description copied from class: TokenStream
This implicitly defines a "contract" between consumers (callers of this method) and producers (implementations of this method that are the source for tokens):

- A consumer must fully consume the previously returned Token before calling this method again.
- A producer must call Token.clear() before setting the fields in it and returning it.

Also, the producer must make no assumptions about a Token after it has been returned: the caller may arbitrarily change it. If the producer needs to hold onto the Token for subsequent calls, it must clone() it before storing it. Note that a TokenFilter is considered a consumer.

Overrides:
next in class TokenStream
Parameters:
reusableToken - a Token that may or may not be used to return; this parameter should never be null (the callee is not required to check for null before using it, but it is a good idea to assert that it is not null.)
Returns:
next Token in the stream or null if end-of-stream was hit
Throws:
java.io.IOException
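The clear-before-reuse side of the contract can be sketched with a minimal stand-in producer. SimpleToken and WordProducer are illustrative only, not Lucene's Token or any real TokenStream:

```java
public class ReusableTokenSketch {
    // Illustrative stand-in for Token: a mutable term plus offsets, with clear().
    static class SimpleToken {
        String term = "";
        int startOffset, endOffset;
        void clear() { term = ""; startOffset = 0; endOffset = 0; }
    }

    // Producer obeying the contract: clear the caller-supplied token before
    // setting its fields, and return null once end-of-stream is reached.
    static class WordProducer {
        private final String[] words;
        private int i, offset;
        WordProducer(String... words) { this.words = words; }

        SimpleToken next(SimpleToken reusableToken) {
            assert reusableToken != null;        // callee may assert non-null
            if (i == words.length) return null;  // end-of-stream
            reusableToken.clear();               // contract: clear before reuse
            reusableToken.term = words[i];
            reusableToken.startOffset = offset;
            reusableToken.endOffset = offset + words[i].length();
            offset = reusableToken.endOffset + 1;
            i++;
            return reusableToken;
        }
    }
}
```

A consumer can pass the same SimpleToken instance on every call, provided it fully consumes each returned token before calling next again.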
public final Token next() throws java.io.IOException
TokenStream
Token
in the stream, or null at EOS.next
in class TokenStream
java.io.IOException
protected static final void addAllLowerCase(java.util.Set target, java.util.Collection col)
protected static char[] makeLowerCaseCopy(char[] buffer)
protected void decompose(Token token)
protected abstract void decomposeInternal(Token token)
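decompose(Token) drives the abstract hook decomposeInternal(Token), where a concrete subclass finds dictionary subwords inside a compound token. A typical dictionary-based strategy scans each start position for candidates between minSubwordSize and maxSubwordSize characters long, skipping words shorter than minWordSize. A dependency-free sketch of that matching loop (illustrative only; the real Lucene subclasses additionally manage token offsets, onlyLongestMatch, and hyphenation grammars):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class DecomposeSketch {
    // For every start position in the lowercased word, try candidate subwords
    // from minSubwordSize up to maxSubwordSize characters and keep the ones
    // found in the (lowercased) dictionary. Words shorter than minWordSize
    // are left undecomposed, mirroring the filter's minWordSize threshold.
    public static List<String> decompose(String word, Set<String> dictionary,
                                         int minWordSize, int minSubwordSize,
                                         int maxSubwordSize) {
        List<String> subwords = new ArrayList<>();
        String lower = word.toLowerCase(Locale.ROOT);
        if (lower.length() < minWordSize) {
            return subwords;
        }
        for (int start = 0; start + minSubwordSize <= lower.length(); start++) {
            for (int len = minSubwordSize;
                 len <= maxSubwordSize && start + len <= lower.length(); len++) {
                String candidate = lower.substring(start, start + len);
                if (dictionary.contains(candidate)) {
                    subwords.add(candidate);
                }
            }
        }
        return subwords;
    }
}
```

In the real filter the matched subwords are queued on the protected tokens list and emitted after the original token, rather than collected into a list as here.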
public void reset() throws java.io.IOException
Description copied from class: TokenFilter
Reset the filter as well as the input TokenStream.

Overrides:
reset in class TokenFilter
Throws:
java.io.IOException
Copyright © 2000-2016 Apache Software Foundation. All Rights Reserved.