X-Git-Url: https://git.openstreetmap.org./nominatim.git/blobdiff_plain/f3c9578bcaf8a1981b160b14809e9dc1377cfb37..84d6b481ae58bf3e998959eb74639e24e99341ea:/docs/customize/Tokenizers.md?ds=inline

diff --git a/docs/customize/Tokenizers.md b/docs/customize/Tokenizers.md
index f75bc6a5..2c7b6878 100644
--- a/docs/customize/Tokenizers.md
+++ b/docs/customize/Tokenizers.md
@@ -19,7 +19,22 @@ they can be configured.
 
 The legacy tokenizer implements the analysis algorithms of older Nominatim
 versions. It uses a special Postgresql module to normalize names and queries.
-This tokenizer is currently the default.
+This tokenizer is automatically installed and used when upgrading an older
+database. It should not be used for new installations anymore.
+
+### Compiling the PostgreSQL module
+
+The tokenizer needs a special C module for PostgreSQL which is not compiled
+by default. If you need the legacy tokenizer, compile Nominatim as follows:
+
+```
+mkdir build
+cd build
+cmake -DBUILD_MODULE=on
+make
+```
+
+### Enabling the tokenizer
 
 To enable the tokenizer add the following line to your project configuration:
 
@@ -47,6 +62,7 @@ normalization functions are hard-coded.
 The ICU tokenizer uses the [ICU library](http://site.icu-project.org/) to
 normalize names and queries. It also offers configurable decomposition and
 abbreviation handling.
+This tokenizer is currently the default.
 
 To enable the tokenizer add the following line to your project configuration:
 
@@ -86,7 +102,7 @@ Here is an example configuration file:
 ``` yaml
 normalization:
     - ":: lower ()"
-    - "ß > 'ss'" # German szet is unimbigiously equal to double ss
+    - "ß > 'ss'" # German szet is unambiguously equal to double ss
 transliteration:
     - !include /etc/nominatim/icu-rules/extended-unicode-to-asccii.yaml
     - ":: Ascii ()"
@@ -112,7 +128,7 @@ The configuration file contains four sections:
 The normalization and transliteration sections each define a set of
 ICU rules that are applied to the names.
 
-The **normalisation** rules are applied after sanitation. They should remove
+The **normalization** rules are applied after sanitation. They should remove
 any information that is not relevant for search at all. Usual rules to be
 applied here are: lower-casing, removing of special characters, cleanup of
 spaces.
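For context when reading the next hunk: the sanitizers documented there are activated through the `sanitizers` section of the ICU tokenizer configuration file. The snippet below is only an illustrative sketch and not part of this patch; the `step` key and the particular steps chosen are assumptions, and the exact options for each sanitizer are given in the descriptions that follow.

``` yaml
sanitizers:
    # Sanitizer steps run in the order they are listed here.
    - step: split-name-list
    - step: strip-brace-terms
    # Steps newly documented by this change:
    - step: clean-postcodes
    - step: clean-tiger-tags
```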
@@ -160,35 +176,66 @@ The following is a list of sanitizers that are shipped with Nominatim.
 
 ##### split-name-list
 
 ::: nominatim.tokenizer.sanitizers.split_name_list
-    selection:
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
 
 ##### strip-brace-terms
 
 ::: nominatim.tokenizer.sanitizers.strip_brace_terms
-    selection:
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
 
 ##### tag-analyzer-by-language
 
 ::: nominatim.tokenizer.sanitizers.tag_analyzer_by_language
-    selection:
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
 
 ##### clean-housenumbers
 
 ::: nominatim.tokenizer.sanitizers.clean_housenumbers
-    selection:
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### clean-postcodes
+
+::: nominatim.tokenizer.sanitizers.clean_postcodes
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### clean-tiger-tags
+
+::: nominatim.tokenizer.sanitizers.clean_tiger_tags
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### delete-tags
+
+::: nominatim.tokenizer.sanitizers.delete_tags
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
+
+##### tag-japanese
+::: nominatim.tokenizer.sanitizers.tag_japanese
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
 
 #### Token Analysis
 
@@ -206,15 +253,20 @@ by a sanitizer (see for example the
 The token-analysis section contains the list of configured analyzers. Each
 analyzer must have an `id` parameter that uniquely identifies the analyzer.
 The only exception is the default analyzer that is used when no special
-analyzer was selected.
+analyzer was selected. There are analyzers with special ids:
+
+ * '@housenumber'. If an analyzer with that name is present, it is used
+   for normalization of house numbers.
+ * '@postcode'. If an analyzer with that name is present, it is used
+   for normalization of postcodes.
 
 Different analyzer implementations may exist. To select the implementation,
-the `analyzer` parameter must be set. Currently there is only one implementation
-`generic` which is described in the following.
+the `analyzer` parameter must be set. The different implementations are
+described in the following.
 
 ##### Generic token analyzer
 
-The generic analyzer is able to create variants from a list of given
+The generic analyzer `generic` is able to create variants from a list of given
 abbreviation and decomposition replacements and introduce spelling variations.
 
 ###### Variants
@@ -331,6 +383,22 @@ the mode by adding:
 
 to the analyser configuration.
 
+##### Housenumber token analyzer
+
+The analyzer `housenumbers` is purpose-made to analyze house numbers. It
+creates variants with optional spaces between numbers and letters. Thus,
+house numbers of the form '3 a', '3A', '3-A' etc. are all considered equivalent.
+
+The analyzer cannot be customized.
+
+##### Postcode token analyzer
+
+The analyzer `postcodes` is purpose-made to analyze postcodes. It supports
+a 'lookup' variant of the token, which produces variants with optional
+spaces. Use it together with the clean-postcodes sanitizer.
+
+The analyzer cannot be customized.
+
 ### Reconfiguration
 
 Changing the configuration after the import is currently not possible, although
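As a companion to the analyzer sections added above, here is an illustrative sketch, not part of this patch, of how the special `@housenumber` and `@postcode` ids might be wired up in the `token-analysis` section, assuming they are paired with the `housenumbers` and `postcodes` analyzers described in the new text:

``` yaml
token-analysis:
    # Default analyzer, used for all names without a special id.
    - analyzer: generic
    # Dedicated analyzer for house numbers.
    - id: "@housenumber"
      analyzer: housenumbers
    # Dedicated analyzer for postcodes; pairs with the clean-postcodes sanitizer.
    - id: "@postcode"
      analyzer: postcodes
```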