The dataset gets updated once per year. Downloading the files is prone to be slow (it can take a full day) and converting them can take hours as well.
-Replace '2018' with the current year throughout.
+Replace '2019' with the current year throughout.
1. Install the GDAL library with its Python bindings, and the unzip tool
# Ubuntu:
sudo apt-get install python3-gdal unzip
- 2. Get the TIGER 2018 data. You will need the EDGES files
+ 2. Get the TIGER 2019 data. You will need the EDGES files
(3,233 zip files, 11GB total).
- wget -r ftp://ftp2.census.gov/geo/tiger/TIGER2018/EDGES/
+ wget -r ftp://ftp2.census.gov/geo/tiger/TIGER2019/EDGES/
3. Convert the data into SQL statements. Adjust the file paths in the scripts as needed (example paths are sketched after this list).
./convert.sh <input-path> <output-path>
4. Optionally, package the created files
-
- tar -czf tiger2018-nominatim-preprocessed.tar.gz tiger
+
+ tar -czf tiger2019-nominatim-preprocessed.tar.gz tiger
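The sketch below adds a few optional sanity checks to the steps above. It is not part of the original instructions: the mirror directory `ftp2.census.gov/geo/tiger/TIGER2019/EDGES/` is simply what `wget -r` creates by default, and the `tiger/` output directory is taken from the packaging command, so adjust both to your setup.

    # step 1: the GDAL Python bindings should be importable
    python3 -c "from osgeo import ogr"

    # step 2: roughly 3,233 zip files are expected in the mirror
    find ftp2.census.gov/geo/tiger/TIGER2019/EDGES/ -name '*.zip' | wc -l

    # step 3: convert the mirrored zip files into SQL statements
    ./convert.sh ftp2.census.gov/geo/tiger/TIGER2019/EDGES/ tiger/

    # step 4: list the packaged archive to confirm it contains the tiger directory
    tar -tzf tiger2019-nominatim-preprocessed.tar.gz | head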
You can add TIGER data to your own Nominatim instance by following these steps. The
entire US adds about 10GB to your database.
- 1. Get preprocessed TIGER 2018 data and unpack it into the
+ 1. Get preprocessed TIGER 2019 data and unpack it into the
data directory in your Nominatim sources:
cd Nominatim/data
- wget https://nominatim.org/data/tiger2018-nominatim-preprocessed.tar.gz
- tar xf tiger2018-nominatim-preprocessed.tar.gz
+ wget https://nominatim.org/data/tiger2019-nominatim-preprocessed.tar.gz
+ tar xf tiger2019-nominatim-preprocessed.tar.gz
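A quick way to confirm the archive unpacked as expected (the `tiger` directory and the SQL files inside it follow from the preprocessing steps described above; this check is an addition, not part of the original instructions):

    ls tiger/ | head     # should show the generated SQL files
    ls tiger/ | wc -l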
`data-source/us-tiger/README.md` explains how the data got preprocessed.