# Writing custom sanitizer and token analysis modules for the ICU tokenizer

The [ICU tokenizer](../customize/Tokenizers.md#icu-tokenizer) provides a
highly customizable method to pre-process and normalize the name information
of the input data before it is added to the search index. It comes with a
selection of sanitizers and token analyzers which you can use to adapt your
installation to your needs. If the provided modules are not enough, you can
also provide your own implementations. This section describes the API
of sanitizers and token analysis.

!!! warning
    This API is currently in early alpha status. While this API is meant to
    be a public API on which other sanitizers and token analyzers may be
    implemented, it is not guaranteed to be stable at the moment.

## Using non-standard modules

Sanitizer names (in the `step` property), token analysis names (in the
`analyzer` property) and query preprocessor names (in the `step` property)
may refer to externally supplied modules. There are two ways
to include external modules: through a library or from the project directory.

To include a module from a library, use the absolute import path as name and
make sure the library can be found in your PYTHONPATH.

To use a custom module without creating a library, you can put the module
somewhere in your project directory and then use the relative path to the
file. Include the whole name of the file including the `.py` ending.

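Both styles of reference might then look like this in `icu_tokenizer.yaml`
(the module names here are invented purely for illustration):

``` yaml
sanitizers:
    # module importable from a library on the PYTHONPATH
    - step: mylib.sanitizers.clean_names
    # module file placed inside the project directory
    - step: sanitizers/clean_names.py
```
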
## Custom query preprocessors

A query preprocessor must export a single factory function `create` with
the following signature:

``` python
def create(config: QueryConfig) -> Callable[[list[Phrase]], list[Phrase]]
```

The function receives the custom configuration for the preprocessor and
returns a callable (function or class) with the actual preprocessing
code. When a query comes in, the callable gets a list of phrases
and needs to return the transformed list of phrases. The list and phrases
may be changed in place or a completely new list may be generated.

The `QueryConfig` is a simple dictionary which contains all configuration
options given in the yaml configuration of the ICU tokenizer. It is up to
the function to interpret the values.

A `Phrase` (`nominatim_api.search.Phrase`) describes a part of the query
that contains one or more independent search terms. Breaking a query into
phrases helps reduce the number of possible tokens Nominatim has to take
into account. However, a phrase break is definitive: a multi-term search
word cannot go over a phrase break. A `Phrase` object has two fields:

* `ptype` further refines the type of phrase (see list below)
* `text` contains the query text for the phrase

The order of phrases matters to Nominatim when doing further processing.
Thus, while you may split or join phrases, you should not reorder them
unless you really know what you are doing.

Phrase types (`nominatim_api.search.PhraseType`) can further help narrowing
down how the tokens in the phrase are interpreted. The following phrase types
are known:

::: nominatim_api.search.PhraseType

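Putting the pieces together, a complete preprocessor module might look like
the following minimal sketch. It collapses runs of whitespace in each phrase
and drops phrases that become too short; the `min-length` configuration
option is invented here purely for illustration:

``` python
from nominatim_api.search import Phrase

def create(config):
    # QueryConfig behaves like a dictionary; 'min-length' is a
    # hypothetical option, read here only to show config access.
    min_length = config.get('min-length', 1)

    def _process(phrases: list[Phrase]) -> list[Phrase]:
        # Collapse runs of whitespace within each phrase.
        cleaned = [Phrase(p.ptype, ' '.join(p.text.split()))
                   for p in phrases]
        # Drop phrases that are now shorter than the minimum.
        return [p for p in cleaned if len(p.text) >= min_length]

    return _process
```
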
## Custom sanitizer modules

A sanitizer module must export a single factory function `create` with the
following signature:

``` python
def create(config: SanitizerConfig) -> Callable[[ProcessInfo], None]
```

The function receives the custom configuration for the sanitizer and must
return a callable (function or class) that transforms the name and address
terms of a place. When a place is processed, a `ProcessInfo` object
is created from the information that was queried from the database. This
object is sequentially handed to each configured sanitizer, so that each
sanitizer receives the result of processing from the previous sanitizer.
After the last sanitizer is finished, the resulting name and address lists
are forwarded to the token analysis module.

Sanitizer functions are instantiated once and then called for each place
that is imported or updated. They don't need to be thread-safe:
if multi-threading is used, each thread creates its own instance of
the function.

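Since `create()` may return any callable, a sanitizer that needs to keep
configured state can also be written as a class with a `__call__` method
instead of a plain function. A minimal sketch, where the `kinds`
configuration option is invented for illustration:

``` python
class _KindFilter:
    """ Remove all names whose kind is listed in the configuration. """

    def __init__(self, config):
        # 'kinds' is a hypothetical option; SanitizerConfig can be
        # read like a dictionary.
        self.kinds = set(config.get('kinds', []))

    def __call__(self, obj):
        obj.names = [n for n in obj.names if n.kind not in self.kinds]


def create(config):
    # An instance with __call__ serves as the filter callable.
    return _KindFilter(config)
```
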
### Sanitizer configuration

::: nominatim_db.tokenizer.sanitizers.config.SanitizerConfig

### The main filter function of the sanitizer

The filter function receives a single object of type `ProcessInfo`
which has three members:

* `place: PlaceInfo`: read-only information about the place being processed.
* `names: List[PlaceName]`: The current list of names for the place.
* `address: List[PlaceName]`: The current list of address names for the place.

While the `place` member is provided for information only, the `names` and
`address` lists are meant to be manipulated by the sanitizer. It may add and
remove entries, change information within a single entry (for example by
adding extra attributes) or completely replace the list with a different one.

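For instance, a filter may rewrite entries in place or swap out a list
wholesale. The following sketch relies only on the documented `name` and
`kind` fields of `PlaceName`; the kind `note` is chosen arbitrarily:

``` python
def _filter(obj):
    # Change entries in place: strip surrounding whitespace
    # from every name.
    for name in obj.names:
        name.name = name.name.strip()

    # Replace a list entirely: drop all address parts of
    # kind 'note'.
    obj.address = [item for item in obj.address if item.kind != 'note']
```
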
#### PlaceInfo - information about the place

::: nominatim_db.data.place_info.PlaceInfo

#### PlaceName - extended naming information

::: nominatim_db.data.place_name.PlaceName

### Example: Filter for US street prefixes

The following sanitizer removes the directional prefixes from street names
in the US:

``` python
import re

def _filter_function(obj):
    # Only look at streets (rank_address 26/27) in the US.
    if obj.place.country_code == 'us' \
       and obj.place.rank_address >= 26 and obj.place.rank_address <= 27:
        for name in obj.names:
            # Strip a leading directional word from the name.
            name.name = re.sub(r'^(north|south|west|east) ',
                               '', name.name)

def create(config):
    return _filter_function
```

This is the simplest form of a sanitizer module. It defines a single
filter function and implements the required `create()` function by returning
the filter.

The filter function first checks if the object is interesting for the
sanitizer: namely, whether the place is in the US (through `country_code`)
and whether the place is a street (a `rank_address` of 26 or 27). If the
conditions are met, it goes through all available names and
removes any leading directional prefix using a simple regular expression.

Save the source code in a file in your project directory, for example as
`us_streets.py`. Then you can use the sanitizer in your `icu_tokenizer.yaml`:

``` yaml
sanitizers:
    - step: us_streets.py
```

!!! warning
    This example is just a simplified showcase of how to create a sanitizer.
    It is not really meant for real-world use: while the sanitizer would
    correctly transform `West 5th Street` into `5th Street`, it would also
    shorten a simple `North Street` to `Street`.

For more sanitizer examples, have a look at the sanitizers provided by Nominatim.
They can be found in the directory
[`src/nominatim_db/tokenizer/sanitizers`](https://github.com/osm-search/Nominatim/tree/master/src/nominatim_db/tokenizer/sanitizers).

## Custom token analysis module

::: nominatim_db.tokenizer.token_analysis.base.AnalysisModule

::: nominatim_db.tokenizer.token_analysis.base.Analyzer

### Example: Creating acronym variants for long names

The following example of a token analysis module creates acronyms from
very long names and adds them as a variant:

``` python
class AcronymMaker:
    """ This class is the actual analyzer.
    """
    def __init__(self, norm, trans):
        self.norm = norm
        self.trans = trans

    def get_canonical_id(self, name):
        # In simple cases, the normalized name can be used as a canonical id.
        return self.norm.transliterate(name.name).strip()

    def compute_variants(self, name):
        # The transliterated form of the name always makes up a variant.
        variants = [self.trans.transliterate(name)]

        # Only create acronyms from very long words.
        if len(name) > 20:
            # Take the first letter from each word to form the acronym.
            acronym = ''.join(w[0] for w in name.split())
            # If that leads to an acronym with at least three letters,
            # add the resulting acronym as a variant.
            if len(acronym) > 2:
                # Never forget to transliterate the variants before returning them.
                variants.append(self.trans.transliterate(acronym))

        return variants


# The following two functions are the module interface.

def configure(rules, normalizer, transliterator):
    # There is no configuration to parse and no data to set up.
    # Just return an empty configuration.
    return None


def create(normalizer, transliterator, config):
    # Return a new instance of our token analysis class above.
    return AcronymMaker(normalizer, transliterator)
```

Given the name `Trans-Siberian Railway`, the code above would return the full
name `Trans-Siberian Railway` and the acronym `TSR` as a variant, so that
searching would work for both.

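Assuming the module was saved as `acronym_maker.py` in the project directory,
it could then be registered as an additional analyzer in the `token-analysis`
section of `icu_tokenizer.yaml`; the id `@acronyms` is made up for this sketch:

``` yaml
token-analysis:
    - analyzer: generic
    - id: "@acronyms"
      analyzer: acronym_maker.py
```
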
## Sanitizers vs. Token analysis - what to use for variants?

It is not always clear when to implement variations in the sanitizer and
when to write a token analysis module. Just take the acronym example
above: it would also have been possible to write a sanitizer which adds the
acronym as an additional name to the name list. The result would have been
similar. So which should be used when?

The most important thing to keep in mind is that variants created by the
token analysis are only saved in the word lookup table. They do not need
extra space in the search index. If there are many spelling variations, this
can mean quite a significant amount of space is saved.

When creating additional names with a sanitizer, these names are completely
independent. In particular, they can be fed into different token analysis
modules. This gives much greater flexibility, but at the price that the
additional names increase the size of the search index.