-During query time, only normalization and transliteration are relevant.
-An incoming query is first split into name chunks (this usually means splitting
-the string at the commas) and the each part is normalised and transliterated.
-The result is used to look up places in the search index.
+During query time, the tokenizer is responsible for processing incoming
+queries. This happens in two stages:
+
+1. During **query preprocessing** the incoming text is split into name
+   chunks and normalised. This usually means applying the same normalisation
+   as during the import process, but it may also involve additional steps
+   such as word-break detection.
+2. The **token analysis** step breaks down the query parts into tokens,
+   looks them up in the database and assigns each token its possible
+   functions and probabilities.
+
+Query preprocessing can be further customized, while the token analysis
+step is hard-coded.
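+
+The two stages above can be sketched as follows. This is a minimal
+illustration only; the function names, the comma-splitting heuristic and
+the mock index are assumptions, not the tokenizer's actual API:
+
+```python
+# Hypothetical sketch of the two query-time stages described above.
+def preprocess(query):
+    # Stage 1: split the query into name chunks at the commas and
+    # normalise each chunk (here simply trimming and lower-casing).
+    return [part.strip().lower() for part in query.split(',') if part.strip()]
+
+def analyse(chunks, index):
+    # Stage 2: look each chunk up in a (mock) search index and return
+    # its possible functions together with their probabilities.
+    return {chunk: index.get(chunk, []) for chunk in chunks}
+
+index = {'berlin': [('city', 0.9), ('street', 0.1)]}
+analyse(preprocess('Berlin, Germany'), index)
+```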