The tokenizer module in Nominatim is responsible for analysing the names given
to OSM objects and the terms of an incoming query in order to make sure they
can be matched appropriately.
Nominatim offers different tokenizer modules, which behave differently and have
different configuration options. This section describes the tokenizers and how
they can be configured.
The use of a tokenizer is tied to a database installation. You need to choose
and configure the tokenizer before starting the initial import. Once the import
is done, you cannot switch to another tokenizer anymore. Reconfiguring the
chosen tokenizer is very limited as well. See the comments in each tokenizer
section.
## Legacy tokenizer

The legacy tokenizer implements the analysis algorithms of older Nominatim
versions. It uses a special PostgreSQL module to normalize names and queries.
This tokenizer is currently the default.
To enable the tokenizer add the following line to your project configuration:

```
NOMINATIM_TOKENIZER=legacy
```
The PostgreSQL module for the tokenizer is available in the `module` directory
and also installed with the remainder of the software under
`lib/nominatim/module/nominatim.so`. You can specify a custom location for
the module with

```
NOMINATIM_DATABASE_MODULE_PATH=<path to directory where nominatim.so resides>
```
This is particularly useful when the database runs on a different server.
See [Advanced installations](../admin/Advanced-Installations.md#importing-nominatim-to-an-external-postgresql-database) for details.
There are no other configuration options for the legacy tokenizer. All
normalization functions are hard-coded.
## ICU tokenizer

The ICU tokenizer uses the [ICU library](http://site.icu-project.org/) to
normalize names and queries. It also offers configurable decomposition and
abbreviation handling.
To enable the tokenizer add the following line to your project configuration:

```
NOMINATIM_TOKENIZER=icu
```
### How it works

On import the tokenizer processes names in the following three stages:
1. During the **Sanitizer step** incoming names are cleaned up and converted to
   **full names**. This step can be used to regularize spelling, split multi-name
   tags into their parts and tag names with additional attributes. See the
   [Sanitizers section](#sanitizers) below for available cleaning routines.
2. The **Normalization** part removes all information from the full names
   that is not relevant for search.
3. The **Token analysis** step takes the normalized full names and creates
   all transliterated variants under which the name should be searchable.
   See the [Token analysis](#token-analysis) section below for more
   information.
During query time, only normalization and transliteration are relevant.
An incoming query is first split into name chunks (this usually means splitting
the string at the commas) and each part is normalised and transliterated.
The result is used to look up places in the search index.
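As a rough illustration, the stages can be sketched in plain Python. The helper functions below are simplified stand-ins for Nominatim's actual sanitizers and ICU rules, not its real implementation:

```python
import unicodedata

# Simplified stand-ins for the three import-time stages described above.
# Illustrative only: real sanitizers and rules are configurable (see below).

def sanitize(tags):
    """Sanitizer step: split multi-name tag values on ';' into full names."""
    return [name.strip() for value in tags.values() for name in value.split(';')]

def normalize(name):
    """Normalization step: lower-case and drop characters irrelevant to search."""
    return ''.join(c for c in name.lower() if c.isalnum() or c.isspace())

def transliterate(name):
    """Token analysis step (simplest form): produce an ASCII representation."""
    decomposed = unicodedata.normalize('NFKD', name.replace('ß', 'ss'))
    return decomposed.encode('ascii', 'ignore').decode()

tags = {'name': 'Hauptstraße;Main Street'}
tokens = [transliterate(normalize(n)) for n in sanitize(tags)]
print(tokens)  # ['hauptstrasse', 'main street']
```

At query time only the last two functions would run, applied to each comma-separated chunk of the query string.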
### Configuration

The ICU tokenizer is configured using a YAML file which can be selected with
`NOMINATIM_TOKENIZER_CONFIG`. The configuration is read on import and then
saved as part of the internal database status. Later changes to the variable
have no effect.
Here is an example configuration file:

```yaml
normalization:
    - ":: lower ()"
    - "ß > 'ss'" # German szet is unambiguously equal to double ss
transliteration:
    - !include /etc/nominatim/icu-rules/extended-unicode-to-asccii.yaml
    - ":: Ascii ()"
sanitizers:
    - step: split-name-list
token-analysis:
    - analyzer: generic
      variants:
          - !include icu-rules/variants-ca.yaml
          - words:
              - road -> rd
              - bridge -> bdge,br,brdg,bri,brg
```
The configuration file contains four sections:
`normalization`, `transliteration`, `sanitizers` and `token-analysis`.
#### Normalization and Transliteration
The normalization and transliteration sections each define a set of
ICU rules that are applied to the names.
The **normalization** rules are applied after the sanitizer step. They should
remove any information that is not relevant for search at all. Typical rules
applied here are lower-casing, removal of special characters and cleanup of
spaces.
The **transliteration** rules are applied at the end of the tokenization
process to transfer the name into an ASCII representation. Transliteration can
be useful to allow for further fuzzy matching, especially between different
scripts.
Each section must contain a list of
[ICU transformation rules](https://unicode-org.github.io/icu/userguide/transforms/general/rules.html).
The rules are applied in the order in which they appear in the file.
You can also include additional rules from an external YAML file using the
`!include` tag. The included file must contain a valid YAML list of ICU rules
and may again include other files.
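An included file is simply a YAML list of rules. For example, a hypothetical `my-rules.yaml` stripping accents might look like this:

```yaml
# contents of a hypothetical my-rules.yaml
- ":: NFD ()"                 # decompose accented characters
- "[[:Nonspacing Mark:]] >"   # then drop the combining accents
- ":: NFC ()"
```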
The ICU rule syntax contains special characters that conflict with the
YAML syntax. You should therefore always enclose the ICU rules in
double quotes.
#### Sanitizers

The sanitizers section defines an ordered list of functions that are applied
to the name and address tags before they are further processed by the tokenizer.
They allow cleaning up the tagging and bringing it into a standardized form more
suitable for building the search index.
Sanitizers only have an effect on how the search index is built. They
do not change the information about each place that is saved in the
database. In particular, they have no influence on how the results are
displayed. The returned results always show the original information as
stored in the OpenStreetMap database.
Each entry contains information about a sanitizer to be applied. It has a
mandatory parameter `step` which gives the name of the sanitizer. Depending
on the type, it may have additional parameters to configure its operation.
The order of the list matters. The sanitizers are applied exactly in the order
that is configured. Each sanitizer works on the results of the previous one.
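For instance, a configuration like the following (parameter values are illustrative) would first split multi-name values and then strip terms in braces:

```yaml
sanitizers:
    - step: split-name-list
      delimiters: ";,"
    - step: strip-brace-terms
```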
The following is a list of sanitizers that are shipped with Nominatim.
##### split-name-list

::: nominatim.tokenizer.sanitizers.split_name_list

##### strip-brace-terms

::: nominatim.tokenizer.sanitizers.strip_brace_terms

##### tag-analyzer-by-language

::: nominatim.tokenizer.sanitizers.tag_analyzer_by_language
#### Token analysis

Token analyzers take a full name and transform it into one or more normalized
forms that are then saved in the search index. In its simplest form, the
analyzer only applies the transliteration rules. More complex analyzers
create additional spelling variants of a name. This is useful to handle
decomposition and abbreviation.
The ICU tokenizer may use different analyzers for different names. To select
the analyzer to be used, the name must be tagged with the `analyzer` attribute
by a sanitizer (see for example the
[tag-analyzer-by-language sanitizer](#tag-analyzer-by-language)).
The token-analysis section contains the list of configured analyzers. Each
analyzer must have an `id` parameter that uniquely identifies the analyzer.
The only exception is the default analyzer, which is used when no specific
analyzer was selected.
Different analyzer implementations may exist. To select the implementation,
the `analyzer` parameter must be set. Currently there is only one implementation,
`generic`, which is described in the following.
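A configuration with multiple analyzers might look like this (the `de` id and its variant rule are illustrative):

```yaml
token-analysis:
    - analyzer: generic      # default analyzer, used when no id matches
    - id: de                 # used for names a sanitizer tagged with analyzer 'de'
      analyzer: generic
      variants:
          - words:
              - ~straße -> str
```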
##### Generic token analyzer
The generic analyzer is able to create variants from a list of given
abbreviation and decomposition replacements. It takes one optional parameter
`variants` which lists the replacements to apply. If the section is
omitted, then the generic analyzer becomes a simple analyzer that only
applies the transliteration.
The variants section defines lists of replacements which create alternative
spellings of a name. To create the variants, a name is scanned from left to
right and the longest matching replacement is applied until the end of the
string is reached.
The variants section must contain a list of replacement groups. Each group
defines a set of properties that describe where the replacements are
applicable. In addition, the words section defines the list of replacements
to be made. The basic replacement description is of the form:

```
<source>[,<source>[...]] => <target>[,<target>[...]]
```
The left side contains one or more `source` terms to be replaced. The right side
lists one or more replacements. Each source is replaced with each replacement
in turn.
The source and target terms are internally normalized using the
normalization rules given in the configuration. This ensures that the
strings match as expected. In fact, it is better to use unnormalized
words in the configuration because then it is possible to change the
rules for normalization later without having to adapt the variant rules.
In its standard form, only full words match against the source. There
is a special notation to match the prefix and suffix of a word:

```yaml
- ~strasse => str # matches "strasse" as full word and in suffix position
- hinter~ => hntr # matches "hinter" as full word and in prefix position
```
There is no facility to match a string in the middle of the word. The suffix
and prefix notations automatically trigger decomposition mode: two variants
are created for each replacement, one with the replacement attached to the word
and one separate. So in the above example, the tokenization of "hauptstrasse"
creates the variants "hauptstr" and "haupt str". Similarly, the name "rote strasse"
triggers the variants "rote str" and "rotestr". Because decomposition works
both ways, it is sufficient to create the variants at index time. The variant
rules are not applied at query time.
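The decomposition behaviour can be sketched as a toy function. This is an illustration of the rule described above, not Nominatim's implementation:

```python
# Toy sketch of decomposition for a '~strasse => str' style suffix rule:
# one variant with the replacement attached, one with it detached.

def suffix_variants(name: str, source: str, target: str) -> set:
    """Apply a '~source => target' rule to a name ending in source."""
    if not name.endswith(source):
        return {name}
    stem = name[:-len(source)].rstrip()  # drop the suffix and a separating space
    return {stem + target, stem + ' ' + target}

print(sorted(suffix_variants('hauptstrasse', 'strasse', 'str')))
# ['haupt str', 'hauptstr']
print(sorted(suffix_variants('rote strasse', 'strasse', 'str')))
# ['rote str', 'rotestr']
```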
To avoid automatic decomposition, use the `|` notation. A rule like
`~strasse |=> str` simply changes "hauptstrasse" to "hauptstr" and
"rote strasse" to "rote str".
###### Initial and final terms

It is also possible to restrict replacements to the beginning and end of a
name:

```yaml
- ^south => s # matches only at the beginning of the name
- road$ => rd # matches only at the end of the name
```

So the first example triggers a replacement for "south 45th street" but
not for "the south beach restaurant".
###### Replacements vs. variants

The replacement syntax `source => target` works as a pure replacement. It changes
the name instead of creating a variant. To create an additional version, you'd
have to write `source => source,target`. As this is a frequent case, there is
a shortcut notation for it:

```
<source>[,<source>[...]] -> <target>[,<target>[...]]
```
The simple arrow causes an additional variant to be added. Note that
decomposition has an effect here on the source as well. So a rule like
`~strasse -> str` means that for a word like `hauptstrasse` four variants are
created: `hauptstrasse`, `haupt strasse`, `hauptstr` and `haupt str`.
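As with the pure replacement form, the `->` behaviour can be sketched as a toy function; again this illustrates the rule above and is not Nominatim's implementation:

```python
# Toy sketch of a '~strasse -> str' variant rule: the source spelling is kept
# alongside the target, and decomposition applies to both of them.

def arrow_variants(name: str, source: str, target: str) -> set:
    """Apply a '~source -> target' rule, keeping the original spelling too."""
    if not name.endswith(source):
        return {name}
    stem = name[:-len(source)].rstrip()
    return {stem + word for word in (source, target)} | \
           {stem + ' ' + word for word in (source, target)}

print(sorted(arrow_variants('hauptstrasse', 'strasse', 'str')))
# ['haupt str', 'haupt strasse', 'hauptstr', 'hauptstrasse']
```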
### Reconfiguration

Changing the configuration after the import is currently not possible, although
this feature may be added at a later time.