to OSM objects and the terms of an incoming query in order to make sure they
can be matched appropriately.
-Nominatim offers different tokenizer modules, which behave differently and have
-different configuration options. This sections describes the tokenizers and how
-they can be configured.
+Nominatim currently offers only one tokenizer module, the ICU tokenizer. This section
+describes the tokenizer and how it can be configured.
!!! important
- The use of a tokenizer is tied to a database installation. You need to choose
+ The selection of tokenizer is tied to a database installation. You need to choose
and configure the tokenizer before starting the initial import. Once the import
is done, you cannot switch to another tokenizer anymore. Reconfiguring the
chosen tokenizer is very limited as well. See the comments in each tokenizer
section.
-## Legacy tokenizer
-
-The legacy tokenizer implements the analysis algorithms of older Nominatim
-versions. It uses a special Postgresql module to normalize names and queries.
-This tokenizer is currently the default.
-
-To enable the tokenizer add the following line to your project configuration:
-
-```
-NOMINATIM_TOKENIZER=legacy
-```
-
-The Postgresql module for the tokenizer is available in the `module` directory
-and also installed with the remainder of the software under
-`lib/nominatim/module/nominatim.so`. You can specify a custom location for
-the module with
-
-```
-NOMINATIM_DATABASE_MODULE_PATH=<path to directory where nominatim.so resides>
-```
-
-This is in particular useful when the database runs on a different server.
-See [Advanced installations](../admin/Advanced-Installations.md#importing-nominatim-to-an-external-postgresql-database) for details.
-
-There are no other configuration options for the legacy tokenizer. All
-normalization functions are hard-coded.
-
## ICU tokenizer
The ICU tokenizer uses the [ICU library](http://site.icu-project.org/) to
normalize names and queries. It also offers configurable decomposition and
abbreviation handling.
+This tokenizer is currently the default.
To enable the tokenizer add the following line to your project configuration:
See the [Token analysis](#token-analysis) section below for more
information.
-During query time, only normalization and transliteration are relevant.
-An incoming query is first split into name chunks (this usually means splitting
-the string at the commas) and the each part is normalised and transliterated.
-The result is used to look up places in the search index.
+During query time, the tokenizer is responsible for processing incoming
+queries. This happens in two stages:
+
+1. During **query preprocessing** the incoming text is split into name
+   chunks and normalized. This usually means applying the same normalization
+   as during the import process, but it may also involve other processing,
+   for example, word break detection.
+2. The **token analysis** step breaks down the query parts into tokens,
+   looks them up in the database and assigns them possible functions and
+   probabilities.
+
+Query processing can be further customized while the rest of the analysis
+is hard-coded.
### Configuration
Here is an example configuration file:
``` yaml
+query-preprocessing:
+ - normalize
normalization:
- ":: lower ()"
- - "ß > 'ss'" # German szet is unimbigiously equal to double ss
+ - "ß > 'ss'" # German szet is unambiguously equal to double ss
transliteration:
- !include /etc/nominatim/icu-rules/extended-unicode-to-asccii.yaml
- ":: Ascii ()"
-The configuration file contains four sections:
-`normalization`, `transliteration`, `sanitizers` and `token-analysis`.
+The configuration file contains five sections:
+`query-preprocessing`, `normalization`, `transliteration`, `sanitizers`
+and `token-analysis`.
+#### Query preprocessing
+
+The section for `query-preprocessing` defines an ordered list of functions
+that are applied to the query before the token analysis.
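+
+Steps are applied in the order in which they are listed. For example, a
+project configuration with a second step after `normalize` would look
+like the following sketch (`my-extra-step` is a purely hypothetical name):
+
+``` yaml
+query-preprocessing:
+    - normalize
+    - my-extra-step   # hypothetical step, runs after 'normalize'
+```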
+
+The following is a list of preprocessors that are shipped with Nominatim.
+
+##### normalize
+
+::: nominatim_api.query_preprocessing.normalize
+ options:
+ members: False
+ heading_level: 6
+ docstring_section_style: spacy
+
+
#### Normalization and Transliteration
The normalization and transliteration sections each define a set of
ICU rules that are applied to the names.
-The **normalisation** rules are applied after sanitation. They should remove
+The **normalization** rules are applied after sanitation. They should remove
any information that is not relevant for search at all. Usual rules to be
applied here are: lower-casing, removing of special characters, cleanup of
spaces.
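+
+As an illustrative sketch, the usual rules just mentioned can be written
+as ICU rules like this (the second rule follows the pattern used in
+Nominatim's standard configuration):
+
+``` yaml
+normalization:
+    - ":: lower ()"                        # lower-casing
+    - "[[:Punctuation:][:Space:]]+ > ' '"  # clean up punctuation and spaces
+```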
##### split-name-list
-::: nominatim.tokenizer.sanitizers.split_name_list
- selection:
+::: nominatim_db.tokenizer.sanitizers.split_name_list
+ options:
members: False
- rendering:
heading_level: 6
+ docstring_section_style: spacy
##### strip-brace-terms
-::: nominatim.tokenizer.sanitizers.strip_brace_terms
- selection:
+::: nominatim_db.tokenizer.sanitizers.strip_brace_terms
+ options:
members: False
- rendering:
heading_level: 6
+ docstring_section_style: spacy
##### tag-analyzer-by-language
-::: nominatim.tokenizer.sanitizers.tag_analyzer_by_language
- selection:
+::: nominatim_db.tokenizer.sanitizers.tag_analyzer_by_language
+ options:
members: False
- rendering:
heading_level: 6
+ docstring_section_style: spacy
##### clean-housenumbers
-::: nominatim.tokenizer.sanitizers.clean_housenumbers
- selection:
+::: nominatim_db.tokenizer.sanitizers.clean_housenumbers
+ options:
+ members: False
+ heading_level: 6
+ docstring_section_style: spacy
+
+##### clean-postcodes
+
+::: nominatim_db.tokenizer.sanitizers.clean_postcodes
+ options:
+ members: False
+ heading_level: 6
+ docstring_section_style: spacy
+
+##### clean-tiger-tags
+
+::: nominatim_db.tokenizer.sanitizers.clean_tiger_tags
+ options:
+ members: False
+ heading_level: 6
+ docstring_section_style: spacy
+
+##### delete-tags
+
+::: nominatim_db.tokenizer.sanitizers.delete_tags
+ options:
members: False
- rendering:
heading_level: 6
+ docstring_section_style: spacy
+##### tag-japanese
+
+::: nominatim_db.tokenizer.sanitizers.tag_japanese
+ options:
+ members: False
+ heading_level: 6
+ docstring_section_style: spacy
#### Token Analysis
The token-analysis section contains the list of configured analyzers. Each
analyzer must have an `id` parameter that uniquely identifies the analyzer.
The only exception is the default analyzer that is used when no special
-analyzer was selected. There is one special id '@housenumber'. If an analyzer
-with that name is present, it is used for normalization of house numbers.
+analyzer was selected. There are analyzers with special ids:
+
+ * '@housenumber'. If an analyzer with that name is present, it is used
+ for normalization of house numbers.
+ * '@postcode'. If an analyzer with that name is present, it is used
+ for normalization of postcodes.
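+
+A sketch of a corresponding configuration, assuming the shipped `generic`,
+`housenumbers` and `postcodes` analyzers, could look like this (the
+`analyzer` parameter selects the implementation, see below):
+
+``` yaml
+token-analysis:
+    - analyzer: generic        # default analyzer used for all names
+    - id: "@housenumber"
+      analyzer: housenumbers   # used for house numbers
+    - id: "@postcode"
+      analyzer: postcodes      # used for postcodes
+```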
Different analyzer implementations may exist. To select the implementation,
the `analyzer` parameter must be set. The different implementations are
The analyzer cannot be customized.
+##### Postcode token analyzer
+
+The analyzer `postcodes` is purpose-made to analyze postcodes. It supports
+a 'lookup' variant of the token, which produces variants with optional
+spaces, so that a postcode can be found whether or not the query contains
+the space. Use it together with the `clean-postcodes` sanitizer.
+
+The analyzer cannot be customized.
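+
+A minimal sketch of the typical pairing with the sanitizer:
+
+``` yaml
+sanitizers:
+    - step: clean-postcodes
+token-analysis:
+    - id: "@postcode"
+      analyzer: postcodes
+```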
+
### Reconfiguration
Changing the configuration after the import is currently not possible, although