This repository manages the configuration of all the servers run by the
OpenStreetMap Foundation's Operations Working Group. We use
-[Chef](https://www.chef.io/) to automated the configuration of all of our
+[Chef](https://www.chef.io/) to automate the configuration of all of our
servers.
For more information, see the [OSMF Operations Working Group](https://operations.osmfoundation.org/) website.
We make extensive use of roles to configure the servers. In general we have:
-## Server-specific roles (e.g. [faffy.rb](roles/faffy.rb))
+## Server-specific roles (e.g., [faffy.rb](roles/faffy.rb))
These deal with the particular setup or quirks of a server, such as its IP address. They also include the roles for the service the server performs, the location it is in, and any particular hardware it has that needs configuration.
All our servers are [named after dragons](https://wiki.openstreetmap.org/wiki/Servers/Name_Ideas).
-## Hardware-specific roles (e.g. [hp-g9.rb](roles/hp-g9.rb))
+## Hardware-specific roles (e.g., [hp-g9.rb](roles/hp-g9.rb))
These cover anything specific to a certain piece of hardware, such as a motherboard, that could apply to multiple machines.
-## Location-specific roles (e.g. [equinix-dub.rb](roles/equinix-dub.rb))
+## Location-specific roles (e.g., [equinix-dub.rb](roles/equinix-dub.rb))
These form a hierarchy of datacentres, organisations, and countries where our servers are located.
-## Service-specific roles (e.g. [web-frontend](roles/web-frontend.rb))
+## Service-specific roles (e.g., [web-frontend](roles/web-frontend.rb))
These cover the services a server runs; each includes the recipes required for the service, along with any service-specific configuration and any further cascading roles.
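As a sketch of how these layers compose: a server-specific role mostly pins host details and pulls in the other layers through its run list, which Chef expands recursively. The Ruby below models that expansion with a toy role graph (the server name `dummydragon` is invented; the other names echo the examples above):

```ruby
# Minimal model of cascading role resolution: each role names the roles
# it includes, and expansion walks the graph depth-first, skipping any
# role already seen. Role contents here are hypothetical.
ROLES = {
  "dummydragon"  => ["hp-g9", "equinix-dub", "web-frontend"], # server-specific
  "hp-g9"        => [],                                       # hardware-specific
  "equinix-dub"  => ["equinix", "ie"],                        # location hierarchy
  "equinix"      => [],
  "ie"           => [],
  "web-frontend" => ["web"],                                  # service-specific
  "web"          => []
}.freeze

def expand(role, seen = [])
  return seen if seen.include?(role) # guard against cycles and duplicates
  seen << role
  ROLES.fetch(role, []).each { |r| expand(r, seen) }
  seen
end

p expand("dummydragon")
# => ["dummydragon", "hp-g9", "equinix-dub", "equinix", "ie", "web-frontend", "web"]
```

The real expansion is done by Chef itself, but the ordering property is the same: a server role only needs to name the next layer down, not every recipe it eventually runs.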
git "/srv/community.openstreetmap.org/docker" do
action :sync
repository "https://github.com/discourse/discourse_docker.git"
- # Revision pin not possible as launch wrapper automatically updates git repo.
- revision "main"
- depth 1
+ # DANGER launch wrapper automatically updates git repo if rebuild method used: https://github.com/discourse/discourse_docker/blob/107ffb40fe8b1ea40e00814468db974a4f3f8e8f/launcher#L799
+ revision "136c63890674b95df1327d24270c55e4ef8e87a8"
user "root"
group "root"
notifies :run, "notify_group[discourse_container_new_data]"
notifies :run, "execute[discourse_container_data_start]", :immediately # noop if site up
notifies :run, "execute[discourse_container_web_only_bootstrap]", :immediately # site up but runs in parallel. Slow
notifies :run, "execute[discourse_container_web_only_destroy]", :immediately # site down
- notifies :run, "execute[discourse_container_data_rebuild]", :immediately # site down
+ notifies :run, "execute[discourse_container_data_destroy]", :immediately # site down
+ notifies :run, "execute[discourse_container_data_bootstrap]", :immediately # site down
+ notifies :run, "execute[discourse_container_data_start]", :immediately # site down
notifies :run, "execute[discourse_container_web_only_start]", :immediately # site restore
end
notify_group "discourse_container_new_data" do
notifies :run, "execute[discourse_container_web_only_destroy]", :immediately # site down
- notifies :run, "execute[discourse_container_data_rebuild]", :immediately # site down
+ notifies :run, "execute[discourse_container_data_destroy]", :immediately # site down
+ notifies :run, "execute[discourse_container_data_bootstrap]", :immediately # site down
+ notifies :run, "execute[discourse_container_data_start]", :immediately # site down
notifies :run, "execute[discourse_container_web_only_start]", :immediately # site restore
end
notify_group "discourse_container_new_mail_receiver" do
- notifies :run, "execute[discourse_container_mail_receiver_rebuild]", :immediately
+ notifies :run, "execute[discourse_container_mail_receiver_destroy]", :immediately
+ notifies :run, "execute[discourse_container_mail_receiver_bootstrap]", :immediately
+ notifies :run, "execute[discourse_container_mail_receiver_start]", :immediately
end
# Attempt at a failsafe to ensure all containers are running
notifies :run, "execute[discourse_container_mail_receiver_start]", :delayed
end
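Discourse's `launcher rebuild` bundles stop, destroy, bootstrap, and start, so the site is down for the whole (slow) bootstrap; splitting the steps as above lets the web container bootstrap in parallel while the old container keeps serving, as the inline comments note. A toy Ruby model of the downtime difference (the step durations are invented):

```ruby
# Hypothetical step durations in seconds; bootstrap dominates in practice.
STEPS = { :stop => 5, :destroy => 5, :bootstrap => 600, :start => 10 }.freeze

# rebuild: every step happens after the site goes down.
rebuild_downtime = STEPS[:stop] + STEPS[:destroy] + STEPS[:bootstrap] + STEPS[:start]

# split: bootstrap the new image first (site still up), then swap containers.
split_downtime = STEPS[:stop] + STEPS[:destroy] + STEPS[:start]

p rebuild_downtime # => 620
p split_downtime   # => 20
```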
-execute "discourse_container_data_start" do
+execute "discourse_container_data_bootstrap" do
action :nothing
- command "./launcher start data"
+ command "./launcher bootstrap data"
+ cwd "/srv/community.openstreetmap.org/docker/"
+ user "root"
+ group "root"
+end
+
+execute "discourse_container_data_destroy" do
+ action :nothing
+ command "./launcher destroy data"
cwd "/srv/community.openstreetmap.org/docker/"
user "root"
group "root"
end
-execute "discourse_container_data_rebuild" do
+execute "discourse_container_data_start" do
action :nothing
- command "./launcher rebuild data"
+ command "./launcher start data"
cwd "/srv/community.openstreetmap.org/docker/"
user "root"
group "root"
end
-# Rebuild: Stop Destroy Bootstap Start
-execute "discourse_container_mail_receiver_rebuild" do
+execute "discourse_container_mail_receiver_bootstrap" do
+ action :nothing
+ command "./launcher bootstrap mail-receiver"
+ cwd "/srv/community.openstreetmap.org/docker/"
+ user "root"
+ group "root"
+end
+
+execute "discourse_container_mail_receiver_destroy" do
action :nothing
- command "./launcher rebuild mail-receiver"
+ command "./launcher destroy mail-receiver"
cwd "/srv/community.openstreetmap.org/docker/"
user "root"
group "root"
This cookbook configures development servers, such as dev.openstreetmap.org. It
installs packages required by the users and configures Apache for the various
-user and api developement sites.
+user and API development sites.
gnuplot-nox
golang
graphviz
+ htop
irssi
jq
libargon2-dev
lzip
lzop
mailutils
make
+ moreutils
nano
ncftp
osmium-tool
osmosis
pandoc
- pandoc
pbzip2
php-apcu
php-cgi
unrar
unzip
whois
+ xxd
zip
zlib1g-dev
]
# dhcpd Cookbook
-Configures the dhcpd service, which used for the internal network at UCL.
+Configures the dhcpd service, which is used for our internal networks.
cache_dir = Chef::Config[:file_cache_path]
-dnscontrol_version = "4.15.1"
+dnscontrol_version = "4.15.5"
dnscontrol_arch = if arm?
"arm64"
multi_domain = false
hosts_try_dane =
tls_require_ciphers = <%= node[:ssl][:gnutls_ciphers] %>:%LATEST_RECORD_VERSION
+<% if node[:exim][:external_interface] -%>
+ interface = <%= node[:exim][:external_interface] %>
+<% end -%>
# This transport is used for handling pipe deliveries generated by alias or
copyright "Commonwealth of Australia (Geoscience Australia) - Creative Commons Attribution 4.0 International Licence"
background_colour "0 0 0" # Black
projection "EPSG:3857"
- source "/store/imagery/au/agri/combine.vrt"
+ source "/store/imagery/au/agri/combine-cutline-cog.tif"
max_zoom 17
- revision 1
+ revision 3
end
image container_image
volume :"/store/imagery" => "/store/imagery",
:"/srv/imagery/sockets" => "/sockets"
- environment :BIND => "unix:/sockets/titiler.sock",
- :WORKERS_PER_CORE => 1,
- :GDAL_CACHEMAX => 200,
+ environment :GDAL_CACHEMAX => 200,
:GDAL_BAND_BLOCK_CACHE => "HASHSET",
:GDAL_DISABLE_READDIR_ON_OPEN => "EMPTY_DIR",
:GDAL_INGESTED_BYTES_AT_OPEN => 32768,
:VSI_CACHE_SIZE => 5000000,
:TITILER_API_ROOT_PATH => "/api/v1/titiler",
:FORWARDED_ALLOW_IPS => "*" # https://docs.gunicorn.org/en/latest/settings.html#forwarded-allow-ips
+ command "gunicorn -k uvicorn.workers.UvicornWorker titiler.application.main:app --bind unix:/sockets/titiler.sock --workers #{node.cpu_cores}"
end
systemd_service "titiler-restart" do
"MS_ERRORFILE" => "stderr",
"GDAL_CACHEMAX" => "128"
limit_nofile 16384
- memory_max "4G"
+ memory_high "12G"
+ memory_max "12G"
user "imagery"
group "imagery"
exec_start "/usr/bin/multiwatch -f 8 --signal=TERM -- /usr/lib/cgi-bin/mapserv"
STATUS DEFAULT
TYPE RASTER
PROCESSING "RESAMPLE=AVERAGE"
- PROCESSING "CLOSE_CONNECTION=DEFER"
END # layer
END
<% require 'uri' %>
# DO NOT EDIT - This file is being maintained by Chef
-location ~* "^/layer/<%= @layer %>/(\d+)/(\d+)/(\d+)\.(png|jpg|jpeg)$" {
+location ~* "^/layer/<%= @layer %>/(\d+)/(\d+)/(\d+)\.(jpg|jpeg|png|webp)$" {
<% if @uses_tiler -%>
set $args "";
- rewrite ^/layer/<%= @layer %>/(\d+)/(\d+)/(\d+)\.jpg /mosaicjson/tiles/WebMercatorQuad/$1/$2/$3@1x?url=<%= URI.encode_www_form_component(@source) %>&pixel_selection=first&tile_format=jpeg break;
- rewrite ^/layer/<%= @layer %>/(\d+)/(\d+)/(\d+)\.jpeg /mosaicjson/tiles/WebMercatorQuad/$1/$2/$3@1x?url=<%= URI.encode_www_form_component(@source) %>&pixel_selection=first&tile_format=jpeg break;
- rewrite ^/layer/<%= @layer %>/(\d+)/(\d+)/(\d+)\.png /mosaicjson/tiles/WebMercatorQuad/$1/$2/$3@1x?url=<%= URI.encode_www_form_component(@source) %>&pixel_selection=first&tile_format=png break;
+ rewrite ^/layer/<%= @layer %>/(\d+)/(\d+)/(\d+)\.(jpg|jpeg|png|webp) /mosaicjson/tiles/WebMercatorQuad/$1/$2/$3@1x.$4?url=<%= URI.encode_www_form_component(@source) %>&pixel_selection=first break;
proxy_pass http://<%= @site %>_tiler_backend;
proxy_set_header Host $host;
proxy_set_header Referer $http_referer;
proxy_set_header Cache-Control "";
proxy_set_header Pragma "";
proxy_redirect off;
- proxy_cache_key "<%= @layer %><%= @revision %> $request_method $1 $2 $3";
+ proxy_cache_key "<%= @layer %><%= @revision %> $request_method $1 $2 $3 $4";
proxy_cache proxy_cache_zone;
proxy_cache_valid 200 204 180d;
proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
}
<% if @root_layer -%>
-rewrite "^/(\d+)/(\d+)/(\d+)\.(png|jpg|jpeg)$" "/layer/<%= @layer %>/$1/$2/$3.$4" last;
+rewrite "^/(\d+)/(\d+)/(\d+)\.(jpg|jpeg|png|webp)$" "/layer/<%= @layer %>/$1/$2/$3.$4" last;
<% end -%>
<% @url_aliases.each do |url| -%>
-rewrite "^<%= url %>/(\d+)/(\d+)/(\d+)\.(png|jpg|jpeg)$" "/layer/<%= @layer %>/$1/$2/$3.$4" last;
+rewrite "^<%= url %>/(\d+)/(\d+)/(\d+)\.(jpg|jpeg|png|webp)$" "/layer/<%= @layer %>/$1/$2/$3.$4" last;
<% end -%>
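The collapsed rewrite above works by capturing the file extension as `$4` and forwarding it to titiler in a single rule, instead of one rewrite per format. A Ruby sketch of the equivalent matching and URL construction (the layer name and source URL below are placeholders):

```ruby
# Model of the nginx location/rewrite pattern: one regex captures z/x/y
# and the extension, so a single rule serves jpg, jpeg, png and webp.
TILE_RE = %r{\A/layer/([^/]+)/(\d+)/(\d+)/(\d+)\.(jpg|jpeg|png|webp)\z}i

def tiler_uri(path, source_url)
  m = TILE_RE.match(path) or return nil
  z, x, y, fmt = m[2], m[3], m[4], m[5]
  "/mosaicjson/tiles/WebMercatorQuad/#{z}/#{x}/#{y}@1x.#{fmt}" \
    "?url=#{source_url}&pixel_selection=first"
end

p tiler_uri("/layer/agri/12/3456/789.webp", "s3%3A%2F%2Fbucket")
# => "/mosaicjson/tiles/WebMercatorQuad/12/3456/789@1x.webp?url=s3%3A%2F%2Fbucket&pixel_selection=first"
p tiler_uri("/layer/agri/12/3456/789.gif", "x")
# => nil (unsupported format never reaches the tiler backend)
```

In the nginx template the layer name is interpolated literally by ERB rather than captured, which is why the extension lands in `$4` there.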
composer
unzip
ffmpeg
+ firejail
]
# Mediawiki enhanced difference engine
if new_resource.commons
mediawiki_extension "QuickInstantCommons" do
site new_resource.site
+ template "mw-ext-QuickInstantCommons.inc.php.erb"
update_site false
end
else
--- /dev/null
+<?php
+# DO NOT EDIT - This file is being maintained by Chef
+wfLoadExtension( 'QuickInstantCommons' );
+$wgUseQuickInstantCommons = false; // Disable as we manually set via wgForeignFileRepos
+$wgForeignFileRepos[] = [
+ 'class' => '\MediaWiki\Extension\QuickInstantCommons\Repo',
+ 'name' => 'wikimediacommons',
+ 'directory' => $wgUploadDirectory,
+ 'apibase' => 'https://commons.wikimedia.org/w/api.php',
+ 'hashLevels' => 2,
+ 'thumbUrl' => 'https://upload.wikimedia.org/wikipedia/commons/thumb',
+ 'fetchDescription' => true,
+ 'descriptionCacheExpiry' => 60*60*24*30,
+ 'transformVia404' => true,
+ 'abbrvThreshold' => 160,
+ 'apiMetadataExpiry' => 60*60*24*30,
+ 'disabledMediaHandlers' => [TiffHandler::class]
+];
def add_comments(xml, cs)
# grab the visible changeset comments as well
- res = @conn.exec("select cc.author_id, u.display_name as author, cc.body, cc.created_at from changeset_comments cc join users u on cc.author_id=u.id where cc.changeset_id=#{cs.id} and cc.visible order by cc.created_at asc")
+ res = @conn.exec("select cc.id, cc.author_id, u.display_name as author, cc.body, (cc.created_at at time zone 'utc') as created_at from changeset_comments cc join users u on cc.author_id=u.id where cc.changeset_id=#{cs.id} and cc.visible order by cc.created_at asc")
xml["comments_count"] = res.num_tuples.to_s
# early return if there aren't any comments
discussion = XML::Node.new("discussion")
res.each do |row|
comment = XML::Node.new("comment")
+ comment["id"] = row["id"]
comment["uid"] = row["author_id"]
comment["user"] = xml_sanitize(row["author"])
comment["date"] = Time.parse(row["created_at"]).getutc.xmlschema
# sync a directory to guarantee it's on disk. have to recurse to the root
# to guarantee sync for newly created directories.
def fdirsync(d)
- while d != "/"
+ while d != "/" && d != "."
fsync(d)
d = File.dirname(d)
end
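The added `d != "."` guard matters because `File.dirname` only converges on `"/"` for absolute paths; on a relative path it converges on `"."`, so the old loop would never terminate. A quick demonstration:

```ruby
# File.dirname walks toward the root for absolute paths, but toward "."
# for relative ones -- so a loop that only stops at "/" never terminates
# on relative input.
steps = []
d = "a/b/c"
until d == "/" || d == "."
  steps << d
  d = File.dirname(d)
end

p steps # => ["a/b/c", "a/b", "a"]
p d     # => "."
```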
class Replicator
def initialize(config)
@config = YAML.safe_load(File.read(config))
- @state = YAML.safe_load(File.read(@config["state_file"]), [Time])
+ @state = YAML.safe_load(File.read(@config["state_file"]), :permitted_classes => [Time], :fallback => {})
@conn = PG::Connection.connect(@config["db"])
# get current time from the database rather than the current system
@now = @conn.exec("select now() as now").map { |row| Time.parse(row["now"]) }[0]
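Psych's `safe_load` refuses to instantiate non-basic classes such as `Time` unless they are explicitly permitted, and `:fallback` supplies a default when the state file is empty. A minimal illustration (requires Psych 3.1+ for the keyword arguments):

```ruby
require "yaml"

doc = "last_run: 2024-01-01 00:00:00 Z\n"

# Without permission, the Time scalar is rejected outright...
begin
  YAML.safe_load(doc)
rescue Psych::DisallowedClass => e
  puts e.message
end

# ...with it, the timestamp round-trips as a Time object.
state = YAML.safe_load(doc, :permitted_classes => [Time])
p state["last_run"].class # => Time

# :fallback covers a state file that exists but is empty.
p YAML.safe_load("", :fallback => {}) # => {}
```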
# for us to look at anything that was closed recently, and filter from
# there.
changesets = @conn
- .exec("select id, created_at, closed_at, num_changes from changesets where closed_at > ((now() at time zone 'utc') - '1 hour'::interval)")
+ .exec("select id, (created_at at time zone 'utc') as created_at, (closed_at at time zone 'utc') as closed_at, num_changes from changesets where (closed_at at time zone 'utc') > ((now() at time zone 'utc') - '1 hour'::interval)")
.map { |row| Changeset.new(row) }
.select { |cs| cs.activity_between?(last_run, @now) }
# but also add any changesets which have new comments
new_ids = @conn
- .exec("select distinct changeset_id from changeset_comments where created_at >= '#{last_run}' and created_at < '#{@now}' and visible")
+ .exec("select distinct changeset_id from changeset_comments where (created_at at time zone 'utc') >= '#{last_run}' and (created_at at time zone 'utc') < '#{@now}' and visible")
.map { |row| row["changeset_id"].to_i }
.reject { |c_id| cs_ids.include?(c_id) }
new_ids.each do |id|
@conn
- .exec("select id, created_at, closed_at, num_changes from changesets where id=#{id}")
+ .exec("select id, (created_at at time zone 'utc') as created_at, (closed_at at time zone 'utc') as closed_at, num_changes from changesets where id=#{id}")
.map { |row| Changeset.new(row) }
.each { |cs| changesets << cs }
end
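The queries above cast every timestamp to UTC inside the database because the Ruby side parses them with `Time.parse`, which assumes the local zone when the string carries no offset. A small demonstration:

```ruby
require "time"

# A timestamp string without an offset is parsed in the local zone,
# which skews comparisons unless the process happens to run in UTC.
naive = Time.parse("2024-06-01 12:00:00")
utc   = Time.parse("2024-06-01 12:00:00 UTC")

p naive.utc_offset == utc.utc_offset # true only when the local zone is UTC
p utc.getutc.xmlschema               # => "2024-06-01T12:00:00Z"
```

Selecting `(created_at at time zone 'utc')` makes the string unambiguous before it ever reaches `Time.parse`.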
property :ports, Hash, :default => {}
property :environment, Hash, :default => {}
property :volume, Hash, :default => {}
+property :command, String, :default => ""
action :create do
systemd_service new_resource.service do
notify_access "all"
environment "PODMAN_SYSTEMD_UNIT" => "%n"
exec_start_pre "/bin/rm --force %t/%n.ctr-id"
- exec_start "/usr/bin/podman run --cidfile=%t/%n.ctr-id --cgroups=no-conmon --userns=auto --label=io.containers.autoupdate=registry --pids-limit=-1 #{publish_options} #{environment_options} #{volume_options} --rm --sdnotify=conmon --detach --replace --name=%N #{new_resource.image}"
+ exec_start "/usr/bin/podman run --cidfile=%t/%n.ctr-id --cgroups=no-conmon "\
+ "--userns=auto --label=io.containers.autoupdate=registry "\
+ "--pids-limit=-1 #{publish_options} #{environment_options} "\
+ "#{volume_options} --rm --sdnotify=conmon --detach --replace "\
+ "--name=%N #{new_resource.image} #{new_resource.command}"
exec_stop "/usr/bin/podman stop --ignore --time=10 --cidfile=%t/%n.ctr-id"
exec_stop_post "/usr/bin/podman rm --force --ignore --cidfile=%t/%n.ctr-id"
timeout_start_sec 180
prometheus_exporter "smokeping" do
port 9374
+ environment "GOMAXPROCS" => "1"
options "--config.file=/etc/prometheus/exporters/smokeping.yml"
capability_bounding_set "CAP_NET_RAW"
ambient_capabilities "CAP_NET_RAW"
settings["opensearch"]["contact"] = "webmaster@openstreetmap.org"
settings["paths"]["bin_dir"] = "#{directory}/build/src"
settings["sources"]["download"] = ""
- settings["sources"]["create"] = "db languages projects wiki wikidata chronology"
+ settings["sources"]["create"] = "db languages projects wiki wikidata chronology sw"
settings["sources"]["db"]["planetfile"] = "/var/lib/planet/planet.osh.pbf"
settings["sources"]["chronology"]["osm_history_file"] = "/var/lib/planet/planet.osh.pbf"
settings["tagstats"]["geodistribution"] = "DenseMmapArray"
owner "root"
group "root"
mode "755"
- variables :tilekiln_bin => "#{tilekiln_directory}/bin/tilekiln", :source_database => "spirit", :config_path => "#{shortbread_config}", :diff_size => "1000", :tiles_file => "/srv/vector.openstreetmap.org/data/tiles.txt", :post_processing => "/usr/local/bin/tiles-rerender"
+ variables :tilekiln_bin => "#{tilekiln_directory}/bin/tilekiln", :source_database => "spirit", :config_path => "#{shortbread_config}", :diff_size => "1000", :expiry_dir => "/srv/vector.openstreetmap.org/data/", :post_processing => "/usr/local/bin/tiles-rerender"
end
template "/usr/local/bin/tiles-rerender" do
owner "root"
group "root"
mode "755"
- variables :tilekiln_bin => "#{tilekiln_directory}/bin/tilekiln", :source_database => "spirit", :storage_database => "tiles", :config_path => "#{shortbread_config}", :tiles_file => "/srv/vector.openstreetmap.org/data/tiles.txt", :update_threads => 4
+ variables :tilekiln_bin => "#{tilekiln_directory}/bin/tilekiln", :source_database => "spirit", :storage_database => "tiles", :config_path => "#{shortbread_config}", :expiry_dir => "/srv/vector.openstreetmap.org/data/", :update_threads => 4
end
systemd_service "replicate" do
set -e
-export LUA_PATH='/srv/vector.openstreetmap.org/osm2pgsql-themepark/lua/?.lua;/srv/vector.openstreetmap.org/spirit/?.lua;;'
+export LUA_PATH='/srv/vector.openstreetmap.org/osm2pgsql-themepark/lua/?.lua;;'
# Import the osm2pgsql file specified as an argument, using the locations for spirit
osm2pgsql \
#!/bin/sh
set -eu
-<%= @tilekiln_bin %> generate tiles \
+
+cd "<%= @expiry_dir %>"
+
+wc -l z*.txt
+cat z*.txt | <%= @tilekiln_bin %> generate tiles \
--source-dbname "<%= @source_database %>" \
--storage-dbname "<%= @storage_database %>" \
--num-threads "<%= node[:vectortile][:replication][:threads] %>" \
---config <%= @config_path %> < <%= @tiles_file %>
+--config <%= @config_path %>
#!/bin/sh
# Usage
-# sudo -u tilekiln vector-update
+# sudo -u tileupdate vector-update
set -eu
-export LUA_PATH='/srv/vector.openstreetmap.org/osm2pgsql-themepark/lua/?.lua;/srv/vector.openstreetmap.org/spirit/?.lua;;'
+export LUA_PATH='/srv/vector.openstreetmap.org/osm2pgsql-themepark/lua/?.lua;;'
+cd "<%= @expiry_dir %>"
osm2pgsql-replication update \
-d "<%= @source_database %>" \
--max-diff-size "<%= @diff_size %>"
set -eu
-export LUA_PATH='/srv/vector.openstreetmap.org/osm2pgsql-themepark/lua/?.lua;/srv/vector.openstreetmap.org/spirit/?.lua;;'
+export LUA_PATH='/srv/vector.openstreetmap.org/osm2pgsql-themepark/lua/?.lua;;'
+cd "<%= @expiry_dir %>"
osm2pgsql-replication update \
-d "<%= @source_database %>" \
--max-diff-size "<%= @diff_size %>" \
- --post-processing "<%= @post_processing %>" \
- -- --expire-tiles=10-14 \
- --expire-output="<%= @tiles_file %>"
+ --post-processing "<%= @post_processing %>"
end
mediawiki_site "wiki.openstreetmap.org" do
- aliases ["wiki.osm.org", "wiki.openstreetmap.com", "wiki.openstreetmap.net",
- "wiki.openstreetmap.ca", "wiki.openstreetmap.eu",
- "wiki.openstreetmap.pro", "wiki.openstreetmaps.org",
+ aliases ["wiki.osm.org", "wiki.openstreetmap.com", "wiki.openstreetmaps.org",
"osm.wiki", "www.osm.wiki", "wiki.osm.wiki"]
fpm_max_children 200
exec_start "/usr/bin/php w/maintenance/dumpBackup.php --full --quiet --output=gzip:dump/dump.xml.gz"
working_directory "/srv/wiki.openstreetmap.org"
user "wiki"
+ nice 19
sandbox :enable_network => true
memory_deny_write_execute false
restrict_address_families "AF_UNIX"
systemd_timer "wiki-dump" do
description "Wiki dump"
- on_calendar "02:00"
+ on_calendar "Sun 02:30"
end
service "wiki-dump.timer" do
+++ /dev/null
-name "aarnet"
-description "Role applied to all servers at AARNet"
-
-default_attributes(
- :accounts => {
- :users => {
- :chm => { :status => :administrator },
- :bclifford => { :status => :administrator }
- }
- },
- :hosted_by => "AARNet",
- :location => "Carlton, Victoria, Australia",
- :timezone => "Australia/Melbourne"
-)
-
-override_attributes(
- :networking => {
- :nameservers => ["202.158.207.1", "202.158.207.2"]
- },
- :ntp => {
- :servers => ["0.au.pool.ntp.org", "1.au.pool.ntp.org", "oceania.pool.ntp.org"]
- }
-)
-
-run_list(
- "role[au]"
-)
+++ /dev/null
-name "balerion"
-description "Master role applied to balerion"
-
-default_attributes(
- :networking => {
- :interfaces => {
- :external => {
- :interface => "bond0",
- :role => :external,
- :inet => {
- :address => "138.44.68.134",
- :prefix => "30",
- :gateway => "138.44.68.133"
- },
- :bond => {
- :slaves => %w[ens14f0np0 ens14f1np1]
- }
- }
- }
- },
- :postgresql => {
- :settings => {
- :defaults => {
- :effective_cache_size => "16GB"
- }
- }
- },
- :sysctl => {
- :postgres => {
- :comment => "Increase shared memory for postgres",
- :parameters => {
- "kernel.shmmax" => 9 * 1024 * 1024 * 1024,
- "kernel.shmall" => 9 * 1024 * 1024 * 1024 / 4096
- }
- }
- },
- :tile => {
- :database => {
- :cluster => "16/main",
- :postgis => "3"
- },
- :mapnik => "3.1",
- :replication => {
- :directory => "/store/replication"
- },
- :styles => {
- :default => {
- :tile_directories => [
- { :name => "/store/tiles/default", :min_zoom => 0, :max_zoom => 19 }
- ]
- }
- }
- }
-)
-
-run_list(
- "role[aarnet]",
- "role[geodns]",
- "role[tile]"
-)
+++ /dev/null
-name "bowser"
-description "Master role applied to bowser"
-
-default_attributes(
- :networking => {
- :interfaces => {
- :external => {
- :interface => "bond0",
- :role => :external,
- :inet => {
- :address => "138.44.68.106",
- :prefix => "30",
- :gateway => "138.44.68.105"
- },
- :bond => {
- :slaves => %w[ens14f0np0 ens14f1np1]
- }
- }
- }
- },
- :postgresql => {
- :settings => {
- :defaults => {
- :effective_cache_size => "16GB"
- }
- }
- },
- :sysctl => {
- :postgres => {
- :comment => "Increase shared memory for postgres",
- :parameters => {
- "kernel.shmmax" => 9 * 1024 * 1024 * 1024,
- "kernel.shmall" => 9 * 1024 * 1024 * 1024 / 4096
- }
- }
- },
- :tile => {
- :database => {
- :cluster => "16/main",
- :postgis => "3"
- },
- :mapnik => "3.1",
- :replication => {
- :directory => "/store/replication"
- },
- :styles => {
- :default => {
- :tile_directories => [
- { :name => "/store/tiles/default", :min_zoom => 0, :max_zoom => 19 }
- ]
- }
- }
- }
-)
-
-run_list(
- "role[aarnet]",
- "role[tile]"
-)
},
:external => {
:zone => "dub",
+ :inet => {
+ :rules => [
+ { :to => "10.0.0.0/8", :table => "main", :priority => 50 },
+ { :to => "172.16.0.0/12", :table => "main", :priority => 50 },
+ { :to => "192.168.0.0/16", :table => "main", :priority => 50 }
+ ]
+ },
:inet6 => {
:rules => [
{ :to => "2600:9000::/28", :table => 150, :priority => 100 }
:last_address => "10.0.79.254"
},
:exim => {
+ :external_interface => "<;${if <{${randint:100}}{75} {184.104.226.98;2001:470:1:b3b::2}{87.252.214.98;2001:4d78:fe03:1c::2}}",
:routes => {
:openstreetmap => {
:comment => "openstreetmap.org",
:metric => 150,
:source_route_table => 150,
:inet => {
- :address => "87.252.214.101",
+ :address => "87.252.214.104",
:prefix => "27",
:gateway => "87.252.214.97"
},
+++ /dev/null
-name "ovh"
-description "Role applied to all servers at OVH"
-
-default_attributes(
- :hosted_by => "OVH",
- :location => "Roubaix, France"
-)
-
-override_attributes(
- :networking => {
- :nameservers => ["213.186.33.99"]
- },
- :ntp => {
- :servers => ["0.fr.pool.ntp.org", "1.fr.pool.ntp.org", "europe.pool.ntp.org"]
- }
-)
-
-run_list(
- "role[fr]"
-)
+++ /dev/null
-name "scorch"
-description "Master role applied to scorch"
-
-default_attributes(
- :devices => {
- :ssd_system => {
- :comment => "Tune scheduler for system disk",
- :type => "block",
- :bus => "scsi",
- :serial => "3600605b009bbf5601fc3206407a43546",
- :attrs => {
- "queue/scheduler" => "noop",
- "queue/nr_requests" => "256",
- "queue/read_ahead_kb" => "2048"
- }
- }
- },
- :networking => {
- :interfaces => {
- :external => {
- :interface => "eth0",
- :role => :external,
- :inet => {
- :address => "176.31.235.79",
- :prefix => "24",
- :gateway => "176.31.235.254"
- },
- :inet6 => {
- :address => "2001:41d0:2:fc4f::1",
- :prefix => "64",
- :gateway => "2001:41d0:2:fcff:ff:ff:ff:ff"
- }
- }
- }
- }
-)
-
-run_list(
- "role[ovh]"
-)
:max_connections_per_child => 10000
},
:evasive => {
- :page_count => 250,
+ :page_count => 400,
:site_count => 500
}
},