X-Git-Url: https://git.openstreetmap.org./nominatim.git/blobdiff_plain/751563644fe6572f7c63f67525ae56f4d9133e5c..3460a5c230308aa3c0bea66d2fa8502ce647dc36:/docs/customize/Tokenizers.md

diff --git a/docs/customize/Tokenizers.md b/docs/customize/Tokenizers.md
index d3d04502..30be170e 100644
--- a/docs/customize/Tokenizers.md
+++ b/docs/customize/Tokenizers.md
@@ -15,38 +15,12 @@ they can be configured.
 chosen tokenizer is very limited as well. See the comments in each tokenizer
 section.
 
-## Legacy tokenizer
-
-The legacy tokenizer implements the analysis algorithms of older Nominatim
-versions. It uses a special Postgresql module to normalize names and queries.
-This tokenizer is currently the default.
-
-To enable the tokenizer add the following line to your project configuration:
-
-```
-NOMINATIM_TOKENIZER=legacy
-```
-
-The Postgresql module for the tokenizer is available in the `module` directory
-and also installed with the remainder of the software under
-`lib/nominatim/module/nominatim.so`. You can specify a custom location for
-the module with
-
-```
-NOMINATIM_DATABASE_MODULE_PATH=
-```
-
-This is in particular useful when the database runs on a different server.
-See [Advanced installations](Advanced-Installations.md#importing-nominatim-to-an-external-postgresql-database) for details.
-
-There are no other configuration options for the legacy tokenizer. All
-normalization functions are hard-coded.
-
 ## ICU tokenizer
 
 The ICU tokenizer uses the [ICU library](http://site.icu-project.org/) to
 normalize names and queries. It also offers configurable decomposition and
 abbreviation handling.
+This tokenizer is currently the default.
 To enable the tokenizer add the following line to your project configuration:
@@ -86,7 +60,7 @@ Here is an example configuration file:
 ``` yaml
 normalization:
     - ":: lower ()"
-    - "ß > 'ss'" # German szet is unimbigiously equal to double ss
+    - "ß > 'ss'" # German eszett is unambiguously equal to double ss
 transliteration:
     - !include /etc/nominatim/icu-rules/extended-unicode-to-asccii.yaml
     - ":: Ascii ()"
@@ -99,6 +73,9 @@ token-analysis:
       - words:
           - road -> rd
          - bridge -> bdge,br,brdg,bri,brg
+      mutations:
+          - pattern: 'ä'
+            replacements: ['ä', 'ae']
 ```
 
 The configuration file contains four sections:
@@ -109,7 +86,7 @@ The configuration file contains four sections:
 The normalization and transliteration sections each define a set of ICU rules
 that are applied to the names.
 
-The **normalisation** rules are applied after sanitation. They should remove
+The **normalization** rules are applied after sanitation. They should remove
 any information that is not relevant for search at all. Usual rules to be
 applied here are: lower-casing, removing of special characters, cleanup of
 spaces.
@@ -156,29 +133,67 @@ The following is a list of sanitizers that are shipped with Nominatim.
 ##### split-name-list
 
-::: nominatim.tokenizer.sanitizers.split_name_list
-    selection:
+::: nominatim_db.tokenizer.sanitizers.split_name_list
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
 
 ##### strip-brace-terms
 
-::: nominatim.tokenizer.sanitizers.strip_brace_terms
-    selection:
+::: nominatim_db.tokenizer.sanitizers.strip_brace_terms
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
 
 ##### tag-analyzer-by-language
 
-::: nominatim.tokenizer.sanitizers.tag_analyzer_by_language
-    selection:
+::: nominatim_db.tokenizer.sanitizers.tag_analyzer_by_language
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### clean-housenumbers
+
+::: nominatim_db.tokenizer.sanitizers.clean_housenumbers
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### clean-postcodes
+
+::: nominatim_db.tokenizer.sanitizers.clean_postcodes
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### clean-tiger-tags
+
+::: nominatim_db.tokenizer.sanitizers.clean_tiger_tags
+    options:
         members: False
-    rendering:
         heading_level: 6
+        docstring_section_style: spacy
+
+##### delete-tags
+
+::: nominatim_db.tokenizer.sanitizers.delete_tags
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
+
+##### tag-japanese
+
+::: nominatim_db.tokenizer.sanitizers.tag_japanese
+    options:
+        members: False
+        heading_level: 6
+        docstring_section_style: spacy
 
 #### Token Analysis
 
@@ -196,21 +211,25 @@ by a sanitizer (see for example the
 The token-analysis section contains the list of configured analyzers. Each
 analyzer must have an `id` parameter that uniquely identifies the analyzer.
 The only exception is the default analyzer that is used when no special
-analyzer was selected.
+analyzer was selected. There are analyzers with special ids:
+
+ * '@housenumber'. If an analyzer with that name is present, it is used
+   for normalization of house numbers.
+ * '@postcode'. If an analyzer with that name is present, it is used
+   for normalization of postcodes.
 
 Different analyzer implementations may exist. To select the implementation,
-the `analyzer` parameter must be set. Currently there is only one implementation
-`generic` which is described in the following.
+the `analyzer` parameter must be set. The different implementations are
+described in the following.
 
 ##### Generic token analyzer
 
-The generic analyzer is able to create variants from a list of given
-abbreviation and decomposition replacements. It takes one optional parameter
-`variants` which lists the replacements to apply. If the section is
-omitted, then the generic analyzer becomes a simple analyzer that only
-applies the transliteration.
+The generic analyzer `generic` is able to create variants from a list of given
+abbreviation and decomposition replacements and introduce spelling variations.
+
+###### Variants
 
-The variants section defines lists of replacements which create alternative
+The optional 'variants' section defines lists of replacements which create alternative
 spellings of a name. To create the variants, a name is scanned from left to
 right and the longest matching replacement is applied until the end of the
 string is reached.
@@ -296,6 +315,48 @@ decomposition has an effect here on the source as well. So a rule
 
 means that for a word like `hauptstrasse` four variants are created:
 `hauptstrasse`, `haupt strasse`, `hauptstr` and `haupt str`.
 
+###### Mutations
+
+The 'mutation' section in the configuration describes an additional set of
+replacements to be applied after the variants have been computed.
+
+Each mutation is described by two parameters: `pattern` and `replacements`.
+The pattern must contain a single regular expression to search for in the
+variant name. The regular expressions need to follow the syntax for
+[Python regular expressions](https://docs.python.org/3/library/re.html#regular-expression-syntax).
+Capturing groups are not permitted.
+`replacements` must contain a list of strings that the pattern
+should be replaced with. Each occurrence of the pattern is replaced with
+all given replacements. Be mindful of combinatorial explosion of variants.
+
+###### Modes
+
+The generic analyzer supports a special mode `variant-only`. When this mode
+is configured, the analyzer consumes the input token and emits only variants
+(if any exist). Enable the mode by adding:
+
+```
+  mode: variant-only
+```
+
+to the analyzer configuration.
+
+##### Housenumber token analyzer
+
+The analyzer `housenumbers` is purpose-made to analyze house numbers. It
+creates variants with optional spaces between numbers and letters. Thus,
+house numbers of the form '3 a', '3A', '3-A' etc. are all considered equivalent.
+
+The analyzer cannot be customized.
+
+##### Postcode token analyzer
+
+The analyzer `postcodes` is purpose-made to analyze postcodes. It supports
+a 'lookup' variant of the token, which produces variants with optional
+spaces. Use together with the clean-postcodes sanitizer.
+
+The analyzer cannot be customized.
+
 ### Reconfiguration
 
 Changing the configuration after the import is currently not possible, although
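The mutation semantics that the patch documents (each occurrence of the pattern replaced by every listed replacement, so variants multiply per occurrence) can be sketched as follows. `apply_mutation` is a hypothetical illustration of the described behaviour, not Nominatim's actual implementation:

```python
import re
from itertools import product

def apply_mutation(variants, pattern, replacements):
    """Expand each variant by replacing every occurrence of `pattern`
    with each of the given replacements. Hypothetical helper for
    illustration only, not Nominatim's code."""
    result = set()
    parts_re = re.compile(pattern)  # capturing groups are not permitted
    for name in variants:
        parts = parts_re.split(name)
        # One replacement choice per occurrence of the pattern, so the
        # number of variants grows with the cross product of choices.
        for combo in product(replacements, repeat=len(parts) - 1):
            out = parts[0]
            for repl, tail in zip(combo, parts[1:]):
                out += repl + tail
            result.add(out)
    return result

# The 'ä' mutation from the example configuration: two occurrences of
# 'ä' and two replacements yield 2 * 2 = 4 variants.
variants = apply_mutation({'bäckergässchen'}, 'ä', ['ä', 'ae'])
```

This also makes the combinatorial-explosion warning concrete: a name with n matches of the pattern and r replacements produces r^n variants.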
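The house-number equivalence described for the `housenumbers` analyzer ('3 a', '3A' and '3-A' treated as the same number) can be sketched as a canonical comparison key. This is only a hedged illustration of the idea, assuming that spaces and dashes between numbers and letters are the ignorable parts; it is not the analyzer's actual code:

```python
import re

def housenumber_key(hnr):
    """Reduce a house number to a canonical key so that variants that
    differ only in spacing, dashes, or letter case compare equal.
    Hypothetical sketch, not Nominatim's actual analyzer."""
    return re.sub(r'[\s\-]+', '', hnr).casefold()
```

With this key, `housenumber_key('3 a')`, `housenumber_key('3A')` and `housenumber_key('3-A')` all yield the same value.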