Also fixes a path issue during API test DB creation; the old path could never have worked.
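For context on the path fix below: the old code built the test data path from the literal string `'__file__'`, which pathlib treats as an ordinary path component under the current working directory, whereas the module attribute `__file__` points at the source file itself. A minimal sketch of the difference (illustrative only, not the harness code; the extra `..` in the fixed variant is needed because the file name itself has to be stepped over):

```python
from pathlib import Path

# Broken: '__file__' is just a string, so the result depends entirely on
# the directory the tests happen to be started from.
broken = (Path('__file__') / '..' / '..' / 'testdb').resolve()

# Fixed: __file__ (no quotes) is this module's own path, so the testdb
# directory is found relative to the source tree, independent of the CWD.
fixed = (Path(__file__) / '..' / '..' / '..' / 'testdb').resolve()

print(broken)  # depends on the current working directory
print(fixed)   # always relative to this source file's location
```

Resolving the path once up front also lets the tiger, special-phrase and CSV paths further down drop their per-use `.resolve()` calls.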
      | 0 |
    Then there are duplicates
Scenario: Search with bounded viewbox in right area
-   When sending json search query "bar" with address
+   When sending json search query "post" with address
      | bounded | viewbox |
      | 1       | 9,47,10,48 |
    Then result addresses contain
      | ID | town |
      | 0  | Vaduz |
-   When sending json search query "bar" with address
+   When sending json search query "post" with address
      | bounded | viewbox |
      | 1       | 9.49712,47.17122,9.52605,47.16242 |
    Then result addresses contain
    Then result has centroid in 9.49712,47.16242,9.52605,47.17122
Scenario: Prefer results within viewbox
-   When sending json search query "Gässle" with address
-     | accept-language |
-     | en |
-   Then result addresses contain
-     | ID | town |
-     | 0  | Balzers |
    When sending json search query "Gässle" with address
      | accept-language | viewbox |
      | en              | 9.52413,47.10759,9.53140,47.10539 |
    Then result addresses contain
      | ID | village |
      | 0  | Triesen |
+   When sending json search query "Gässle" with address
+     | accept-language | viewbox |
+     | en              | 9.45949,47.08421,9.54094,47.05466 |
+   Then result addresses contain
+     | ID | town |
+     | 0  | Balzers |
Scenario: viewboxes cannot be points
    When sending json search query "foo"
self.api_db_done = True
if not self._reuse_or_drop_db(self.api_test_db):
-   testdata = Path('__file__') / '..' / '..' / 'testdb'
-   self.test_env['NOMINATIM_WIKIPEDIA_DATA_PATH'] = str(testdata.resolve())
+   testdata = (Path(__file__) / '..' / '..' / '..' / 'testdb').resolve()
+   self.test_env['NOMINATIM_WIKIPEDIA_DATA_PATH'] = str(testdata)
+   simp_file = Path(self.website_dir.name) / 'secondary_importance.sql.gz'
+   simp_file.symlink_to(testdata / 'secondary_importance.sql.gz')
    try:
        self.run_nominatim('import', '--osm-file', str(self.api_test_file))
-       self.run_nominatim('add-data', '--tiger-data', str((testdata / 'tiger').resolve()))
+       self.run_nominatim('add-data', '--tiger-data', str(testdata / 'tiger'))
        self.run_nominatim('freeze')
        if self.tokenizer == 'legacy':
-           phrase_file = str((testdata / 'specialphrases_testdb.sql').resolve())
+           phrase_file = str(testdata / 'specialphrases_testdb.sql')
            run_script(['psql', '-d', self.api_test_db, '-f', phrase_file])
        else:
-           csv_path = str((testdata / 'full_en_phrases_test.csv').resolve())
+           csv_path = str(testdata / 'full_en_phrases_test.csv')
            self.run_nominatim('special-phrases', '--import-from-csv', csv_path)
    except:
        self.db_drop_database(self.api_test_db)