chosen tokenizer is very limited as well. See the comments in each tokenizer
section.
## ICU tokenizer
The ICU tokenizer uses the [ICU library](http://site.icu-project.org/) to
normalize names and queries. It also offers configurable decomposition and
abbreviation handling.
This tokenizer is currently the default.

To enable the tokenizer add the following line to your project configuration:
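Following the pattern of the other tokenizer settings, the line is:

```
NOMINATIM_TOKENIZER=icu
```

The normalization and transliteration rules of the tokenizer are then defined
in a YAML configuration file, for example: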
``` yaml
normalization:
- ":: lower ()"
- "ß > 'ss'" # German szet is unambiguously equal to double ss
transliteration:
- !include /etc/nominatim/icu-rules/extended-unicode-to-asccii.yaml
- ":: Ascii ()"
```

The normalization and transliteration sections each define a set of
ICU rules that are applied to the names.
The **normalization** rules are applied after sanitation. They should remove
any information that is not relevant for search at all. Usual rules to be
applied here are: lower-casing, removing of special characters, cleanup of
spaces.
##### split-name-list
::: nominatim_db.tokenizer.sanitizers.split_name_list
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

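As an illustration, a sanitizer is enabled by adding a `step` entry to the
`sanitizers` list of the ICU tokenizer configuration. A minimal sketch; the
`delimiters` option shown here is an assumption for the sake of the example:

```yaml
sanitizers:
    - step: split-name-list
      # assumed option: characters at which to split multi-value names
      delimiters: ";,"
```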
##### strip-brace-terms
::: nominatim_db.tokenizer.sanitizers.strip_brace_terms
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

##### tag-analyzer-by-language
::: nominatim_db.tokenizer.sanitizers.tag_analyzer_by_language
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

##### clean-housenumbers

::: nominatim_db.tokenizer.sanitizers.clean_housenumbers
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

##### clean-postcodes

::: nominatim_db.tokenizer.sanitizers.clean_postcodes
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

##### clean-tiger-tags

::: nominatim_db.tokenizer.sanitizers.clean_tiger_tags
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

##### delete-tags

::: nominatim_db.tokenizer.sanitizers.delete_tags
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy

##### tag-japanese

::: nominatim_db.tokenizer.sanitizers.tag_japanese
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy
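Sanitizers are run in the order in which they appear in the configuration. A
sketch combining several of the sanitizers above into one pipeline (shown in
their parameter-free form; any options would follow each `step` entry):

```yaml
sanitizers:
    - step: clean-housenumbers
    - step: clean-postcodes
    - step: clean-tiger-tags
    - step: split-name-list
    - step: strip-brace-terms
```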
#### Token Analysis
The token-analysis section contains the list of configured analyzers. Each
analyzer must have an `id` parameter that uniquely identifies the analyzer.
The only exception is the default analyzer that is used when no special
analyzer was selected. There are analyzers with special ids:

 * '@housenumber'. If an analyzer with that name is present, it is used
   for normalization of house numbers.
 * '@postcode'. If an analyzer with that name is present, it is used
   for normalization of postcodes.
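Putting the special ids together with a default analyzer, a token-analysis
section might be sketched as follows (analyzer names as introduced further
down in this section):

```yaml
token-analysis:
    - analyzer: generic
    - id: "@housenumber"
      analyzer: housenumbers
    - id: "@postcode"
      analyzer: postcodes
```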
Different analyzer implementations may exist. To select the implementation,
the `analyzer` parameter must be set. The different implementations are
described in the following.
##### Generic token analyzer
The generic analyzer `generic` is able to create variants from a list of given
abbreviation and decomposition replacements and introduce spelling variations.
###### Variants
to the analyzer configuration.
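As a sketch, variant rules are listed under a `variants` key of the analyzer
configuration; the arrow notation maps a full form to its abbreviated variants:

```yaml
token-analysis:
    - analyzer: generic
      variants:
          - words:
              - road -> rd
              - bridge -> bdge,br,brdg,bri,brg
```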
##### Housenumber token analyzer

The analyzer `housenumbers` is purpose-made to analyze house numbers. It
creates variants with optional spaces between numbers and letters. Thus,
house numbers of the form '3 a', '3A', '3-A' etc. are all considered equivalent.

The analyzer cannot be customized.

##### Postcode token analyzer

The analyzer `postcodes` is purpose-made to analyze postcodes. It supports
a 'lookup' variant of the token, which produces variants with optional
spaces. Use together with the clean-postcodes sanitizer.

The analyzer cannot be customized.

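For example, the sanitizer and analyzer can be combined like this (a sketch
following the recommendation above):

```yaml
sanitizers:
    - step: clean-postcodes
token-analysis:
    - analyzer: generic
    - id: "@postcode"
      analyzer: postcodes
```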
### Reconfiguration
Changing the configuration after the import is currently not possible, although