Package org.apache.lucene.analysis
Class LimitTokenCountAnalyzer
- java.lang.Object
  - org.apache.lucene.analysis.Analyzer
    - org.apache.lucene.analysis.LimitTokenCountAnalyzer
All Implemented Interfaces:
  Closeable, AutoCloseable
public final class LimitTokenCountAnalyzer extends Analyzer
This Analyzer limits the number of tokens while indexing. It is a replacement for the maximum field length setting inside IndexWriter.
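For orientation, here is a minimal sketch of how this class can stand in for the old maximum field length setting, assuming a Lucene 3.x-style API; the StandardAnalyzer delegate, the 10,000-token limit, the RAMDirectory, and the Version constant are illustrative choices and may need adjusting for your release:

  import java.io.IOException;

  import org.apache.lucene.analysis.LimitTokenCountAnalyzer;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.RAMDirectory;
  import org.apache.lucene.util.Version;

  public class LimitTokenCountExample {
    public static void main(String[] args) throws IOException {
      // Wrap the real analyzer; only the first 10,000 tokens of each field are indexed.
      LimitTokenCountAnalyzer analyzer =
          new LimitTokenCountAnalyzer(new StandardAnalyzer(Version.LUCENE_36), 10000);

      Directory dir = new RAMDirectory();
      IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_36, analyzer));

      Document doc = new Document();
      doc.add(new Field("body", "a very long text ...", Field.Store.NO, Field.Index.ANALYZED));
      writer.addDocument(doc);  // tokens past the limit are silently dropped
      writer.close();
    }
  }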
Constructor Summary
Constructors
- LimitTokenCountAnalyzer(Analyzer delegate, int maxTokenCount)
  Build an analyzer that limits the maximum number of tokens per field.
Method Summary
- int getOffsetGap(Fieldable field)
  Just like Analyzer.getPositionIncrementGap(java.lang.String), except for Token offsets instead.
- int getPositionIncrementGap(String fieldName)
  Invoked before indexing a Fieldable instance if terms have already been added to that field.
- TokenStream reusableTokenStream(String fieldName, Reader reader)
  Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method.
- TokenStream tokenStream(String fieldName, Reader reader)
  Creates a TokenStream which tokenizes all the text in the provided Reader.
- String toString()
Methods inherited from class org.apache.lucene.analysis.Analyzer
close, getPreviousTokenStream, setPreviousTokenStream
Constructor Detail
LimitTokenCountAnalyzer
public LimitTokenCountAnalyzer(Analyzer delegate, int maxTokenCount)
Build an analyzer that limits the maximum number of tokens per field.
Method Detail
tokenStream
public TokenStream tokenStream(String fieldName, Reader reader)
Description copied from class: Analyzer
Creates a TokenStream which tokenizes all the text in the provided Reader. Must be able to handle null field name for backward compatibility.
- Specified by:
  tokenStream in class Analyzer
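A rough, self-contained sketch of consuming the limited stream directly (the WhitespaceAnalyzer delegate, the 3-token limit, and the Version constant are illustrative; attribute-based consumption as shown assumes Lucene 3.1 or later):

  import java.io.IOException;
  import java.io.StringReader;

  import org.apache.lucene.analysis.LimitTokenCountAnalyzer;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.WhitespaceAnalyzer;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
  import org.apache.lucene.util.Version;

  public class TokenStreamLimitDemo {
    public static void main(String[] args) throws IOException {
      // Cap the stream at 3 tokens; the delegate emits one token per whitespace-separated word.
      LimitTokenCountAnalyzer analyzer =
          new LimitTokenCountAnalyzer(new WhitespaceAnalyzer(Version.LUCENE_36), 3);

      TokenStream stream = analyzer.tokenStream("body", new StringReader("one two three four five"));
      CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
      stream.reset();
      while (stream.incrementToken()) {
        System.out.println(term.toString());  // prints: one, two, three
      }
      stream.end();
      stream.close();
    }
  }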
reusableTokenStream
public TokenStream reusableTokenStream(String fieldName, Reader reader) throws IOException
Description copied from class: Analyzer
Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method. Callers that do not need to use more than one TokenStream at the same time from this analyzer should use this method for better performance.
- Overrides:
  reusableTokenStream in class Analyzer
- Throws:
  IOException
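A hedged sketch of the reuse pattern this description refers to, analyzing several values sequentially on one thread (the WhitespaceAnalyzer delegate, 100-token limit, field name, and Version constant are illustrative):

  import java.io.IOException;
  import java.io.StringReader;

  import org.apache.lucene.analysis.LimitTokenCountAnalyzer;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.WhitespaceAnalyzer;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
  import org.apache.lucene.util.Version;

  public class ReusableStreamDemo {
    public static void main(String[] args) throws IOException {
      LimitTokenCountAnalyzer analyzer =
          new LimitTokenCountAnalyzer(new WhitespaceAnalyzer(Version.LUCENE_36), 100);

      // Analyze several values on one thread, letting the analyzer hand back a reusable stream.
      for (String value : new String[] {"first chunk of text", "second chunk of text"}) {
        TokenStream stream = analyzer.reusableTokenStream("body", new StringReader(value));
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
          System.out.println(term.toString());
        }
        stream.end();
      }
    }
  }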
getPositionIncrementGap
public int getPositionIncrementGap(String fieldName)
Description copied from class: Analyzer
Invoked before indexing a Fieldable instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between Fieldable instances using the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including across Fieldable instances, are in successive positions, allowing exact PhraseQuery matches, for instance, across Fieldable instance boundaries.
- Overrides:
  getPositionIncrementGap in class Analyzer
- Parameters:
  fieldName - Fieldable name being indexed.
- Returns:
  position increment gap, added to the next token emitted from Analyzer.tokenStream(String,Reader)
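To make the gap concrete, here is a hedged sketch of a delegate analyzer that leaves a gap of 100 positions between values of the same field (GappedAnalyzer is a hypothetical name; the WhitespaceAnalyzer delegate and the Version constant are illustrative). With two Field instances named "body" in one Document, the second value's terms then start 100 positions after the first, so an exact PhraseQuery cannot match across the boundary:

  import java.io.Reader;

  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.WhitespaceAnalyzer;
  import org.apache.lucene.util.Version;

  public class GappedAnalyzer extends Analyzer {
    private final Analyzer inner = new WhitespaceAnalyzer(Version.LUCENE_36);

    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
      return inner.tokenStream(fieldName, reader);
    }

    @Override
    public int getPositionIncrementGap(String fieldName) {
      return 100;  // default is 0: terms of successive instances sit in consecutive positions
    }
  }

Wrapped as new LimitTokenCountAnalyzer(new GappedAnalyzer(), 10000), the token cap and the gap would then apply together, on the assumption that this override forwards the call to the wrapped delegate.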
getOffsetGap
public int getOffsetGap(Fieldable field)
Description copied from class: Analyzer
Just like Analyzer.getPositionIncrementGap(java.lang.String), except for Token offsets instead. By default this returns 1 for tokenized fields, as if the fields were joined with an extra space character, and 0 for un-tokenized fields. This method is only called if the field produced at least one token for indexing.
- Overrides:
  getOffsetGap in class Analyzer
- Parameters:
  field - the field just indexed
- Returns:
  offset gap, added to the next token emitted from Analyzer.tokenStream(String,Reader)
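As a worked example of the default gap of 1: if two tokenized values "red car" (last token ending at offset 7) and "blue bike" are added for the same field, the second value's tokens start at offset 8, exactly as if the two values had been joined by a single space character.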