1. Create a new project in Google Cloud Platform
2. Go to "VPC network" > "Firewall rules"
3. Click on "default-allow-http" > "Edit", and change tcp: 80 to 80,81. Save and then return to the list of Firewall rules.
4. Click on "default-allow-https" > "Edit", and change tcp: 443 to 443,444. Save.
5. Create a new public / private key pair with `ssh-keygen` or use an existing pair (usually `~/.ssh/id_rsa.pub` and `~/.ssh/id_rsa`, respectively)
6. Go to "Compute Engine" > "Metadata" > "SSH Keys" and paste in a public key, but change the comment to "root"<sup id="a1">[1](#f1)</sup>
7. Go to "Compute Engine" > "VM instances"
8. Create an instance that is:
    * (Region): geographically close to customers for lowest latency
    * Debian 10 on SSD<sup id="a2">[2](#f2)</sup>
        * OS and dependencies are about 3.2GB
        * OSRM map files are about (1.8(car) + 3(foot))GB for all of Canada
        * Add more space<sup id="a3">[3](#f3)</sup> as necessary for database and logs
    * Allow HTTP and HTTPS traffic
    * (CPU / RAM): the database scales linearly with threads and RAM, so increase them<sup id="a5">[5](#f5)</sup> as the database grows. `osrm-routed` requires at least (1.5(car) + 2.5(foot))GB of RAM for all of Canada and its performance scales linearly with threads. The server itself is single-threaded and its RAM usage scales with the number of open websockets.
9. Create an `A record` with the created instance's external IP address on [domains.google](https://domains.google.com/m/registrar/lobojane.com/dns) to associate a domain name (rpc.lobojane.com) with the server
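For a rough sanity check, the disk figures quoted above (OS + OSRM map files, before database and logs) add up as follows. The numbers are this guide's estimates, not measured values:

```bash
# Sum the storage estimates from the instance-sizing notes above (GB).
os_deps=3.2; osrm_car=1.8; osrm_foot=3.0
total=$(awk -v a="$os_deps" -v b="$osrm_car" -v c="$osrm_foot" 'BEGIN { printf "%.1f", a+b+c }')
echo "minimum disk before database/logs: ${total}GB"
```

Anything beyond ~8GB is headroom for the database and logs.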
```bash
ssh root@rpc.lobojane.com
cp /etc/skel/.bashrc ~/.bashrc
nano .bashrc
```
contents:
```bash
alias nano="nano -L" #so that a file won't automatically end with a new line on save
#https://unix.stackexchange.com/a/1292 to record bash history in chronological order, not separated by session
HISTCONTROL=ignoreboth:erasedups #this should already exist, so modify
shopt -s histappend #this should already exist
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"
#to show more information such as timestamps, and show hidden files
alias ls='ls --color=auto -lA' #this should already exist, so modify near dircolors
```
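The `PROMPT_COMMAND` line above uses the `${VAR:+…}` expansion so the history commands are appended after any existing hook rather than clobbering it. A quick demonstration of just that expansion (the hook name is a placeholder):

```bash
# ${VAR:+X} expands to X only when VAR is set and non-empty, so the existing
# value (plus a newline) is kept only when there is one.
PROMPT_COMMAND=""
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a"
echo "$PROMPT_COMMAND"   # empty before: just the history command

PROMPT_COMMAND="some-existing-hook"
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a"
echo "$PROMPT_COMMAND"   # non-empty before: hook, newline, history command
```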
Next, to get up-to-date software instead of shit that is several years old:
```bash
nano /etc/apt/sources.list
```
contents:<sup id="a4">[4](#f4)</sup>
```
deb http://deb.debian.org/debian unstable main contrib non-free
deb http://deb.debian.org/debian experimental main
```
<a id="update"></a>
```bash
apt update
apt -y full-upgrade
apt -y autoremove

apt -t experimental -y install libjpeg-turbo-progs #literally the only experimental package lmao so I can get v2 instead of v1.5
curl -sL https://deb.nodesource.com/setup_14.x | bash
apt -y install nodejs build-essential git imagemagick certbot postgresql-client-12 postgresql-12 postgresql-server-dev-12 libphonenumber-dev #for lobomj
apt -y install cloud-guest-utils e2fsprogs ncdu needrestart ripgrep #for management
apt -y install libboost-program-options-dev libboost-filesystem-dev libboost-iostreams-dev #for osrm-routed
certbot certonly --standalone
nano /etc/letsencrypt/renewal-hooks/deploy/restart-lobomjs.sh
```
contents:<a id="restart"></a>
```
#!/bin/sh
systemctl restart lobomj
systemctl restart lobomj_qa
```
```bash
chmod +x /etc/letsencrypt/renewal-hooks/deploy/restart-lobomjs.sh
nano /etc/postgresql/12/main/postgresql.conf #changes here need `systemctl restart postgresql` to take effect but we'll reboot later
```
comment out "port" and all lines starting with "ssl". contents:
```
listen_addresses = '' #this should already exist, so modify
```
```bash
nano /etc/postgresql/12/main/pg_hba.conf #changes here need `systemctl reload postgresql` to take effect but we'll reboot later
```
delete all entries and replace with contents:
```
local   all             all                                     trust
```
This is so that the only way to access the database is via SSH or within the instance itself. Since the only Linux user is `root`, the only way to be `root` is via SSH, and Postgres users and roles are not used beyond the default superuser `postgres`, it's safe to "impersonate" any Postgres user (`postgres`). As a side effect, Unix domain sockets are significantly [faster](https://momjian.us/main/blogs/pgblog/2012.html#June_6_2012) than TCP/IP.
```bash
nano $(pg_config --sysconfdir)/psqlrc
```
contents:
```
\pset null '¤'
\timing on
\setenv PAGER 'less -S'
```
Generate a deploy key for GitHub to clone and pull the repository
```bash
ssh-keygen
cat ~/.ssh/id_rsa.pub
git clone git@github.com:lobo-genetics/lobomj.git
git clone https://github.com/wulczer/first_last_agg.git
make -C first_last_agg install
git clone https://github.com/xocolatl/periods.git
make -C periods install
git clone https://github.com/pjungwir/range_agg.git
make -C range_agg install
git clone https://github.com/blm768/pg-libphonenumber
make -C pg-libphonenumber install
cd lobomj
npm install
systemctl reboot #there are probably updates that require a system restart anyways
ssh root@rpc.lobojane.com
psql -U postgres -f initialize.sql #If you see "FATAL: Peer authentication failed for user “postgres”", you skipped the "trust" step with pg_hba.conf. sudo -u postgres psql -f initialize.sql works as well, but you'll soon see that you can't both be the postgres user to interact with the database and run a web server on ports 80 and 443 (any port below 1024, https://unix.stackexchange.com/questions/16564)...
cp dotenv .env
nano .env #ssl, cloudinary, etc. 
cd ..
cp -r lobomj lobomj_qa
cd lobomj_qa
nano .env #database
# psql -U postgres -c "create database lobomj_qa with template lobomj" # or
# psql -U postgres -f initialize.sql -v name=lobomj_qa
nano /lib/systemd/system/lobomj.service
```
contents:<sup id="a6">[6](#f6)</sup>
```
[Unit]
Description=lobomj server
After=network.target postgresql.service osrm-car.service osrm-foot.service
BindsTo=postgresql.service osrm-car.service osrm-foot.service

[Service]
Type=exec
WorkingDirectory=/root/lobomj
ExecStart=/usr/bin/node server.js
Restart=on-failure

[Install]
WantedBy=postgresql.service osrm-car.service osrm-foot.service
```
```bash
systemctl enable --now lobomj
cp /lib/systemd/system/lobomj.service /lib/systemd/system/lobomj_qa.service
sed -i 's/lobomj/lobomj_qa/g' /lib/systemd/system/lobomj_qa.service
systemctl enable --now lobomj_qa #scrapes should be done manually to conserve resources
nano /lib/systemd/system/lobomj-watcher.service #optional, probably more useful for qa
```
[contents:](https://superuser.com/a/1531261)
```
[Unit]
Description=restart lobomj server on file changes
After=lobomj.service

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl restart lobomj.service

[Install]
WantedBy=lobomj.service
```
```bash
nano /lib/systemd/system/lobomj-watcher.path
```
contents:
```
[Path]
Unit=lobomj-watcher.service
# trigger on changes to a file, not just create/delete
PathChanged=/root/lobomj

[Install]
WantedBy=lobomj.service
```
```bash
systemctl enable --now lobomj-watcher.{path,service}
nano /lib/systemd/system/lobomj-scrape-summit.service
```
contents:
```
[Unit]
Description=lobomj summit scraper
After=network.target postgresql.service
BindsTo=postgresql.service

[Service]
Type=oneshot
WorkingDirectory=/root/lobomj/scrape
ExecStart=/usr/bin/node --unhandled-rejections=strict summit.js
TimeoutStartSec=10
```
```bash
nano /lib/systemd/system/lobomj-scrape-summit.timer
```
contents:
```
[Unit]
Description=scrape summit every 10 minutes

[Timer]
OnCalendar=*:0/10

[Install]
WantedBy=timers.target
```
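For reference, `OnCalendar=*:0/10` fires at minutes 0, 10, 20, and so on of every hour. `systemd-analyze calendar '*:0/10'` prints the next elapse time along with the normalized form, which (assuming systemd's calendar-event syntax) is:

```
OnCalendar=*-*-* *:00/10:00
```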
```bash
systemctl enable --now lobomj-scrape-summit.timer
nano /lib/systemd/system/lobomj-scrape-ocs.service
```
contents:
```
[Unit]
Description=lobomj ocs scraper
After=network.target postgresql.service
BindsTo=postgresql.service

[Service]
Type=oneshot
WorkingDirectory=/root/lobomj/scrape
ExecStart=/usr/bin/node --unhandled-rejections=strict ocs.ca.js
TimeoutStartSec=600
```
```bash
nano /lib/systemd/system/lobomj-scrape-ocs.timer
```
contents:
```
[Unit]
Description=scrape ocs every hour

[Timer]
OnCalendar=*:46

[Install]
WantedBy=timers.target
```
```bash
systemctl enable --now lobomj-scrape-ocs.timer
exit
```
## OSRM
It is not feasible to compile OSRM or generate the map files on the server. It is recommended to do this on a workstation or a temporary high-specced virtual machine. The requirements for Canada:
* 9GB RAM
* 7.6GB Storage
* a 3950X takes 33.5 minutes. Depending on how much time you want to waste, you can get by with a slower CPU
```bash
apt -y install build-essential git cmake pkg-config libbz2-dev libxml2-dev libzip-dev libboost-all-dev lua5.2 liblua5.2-dev libtbb-dev wget
wget http://download.geofabrik.de/north-america/canada-latest.osm.pbf &
git clone https://github.com/Project-OSRM/osrm-backend.git
cd osrm-backend
mkdir -p build
cd build
cmake ..
cmake --build . --target install -j 30 #can leave out "--target install" and execute binary from here if you want. change 30 to however many threads you want to use
fg #hopefully the download is complete. If not, come back later
cd ../..
mkdir car
mv canada-latest.osm.pbf car
cd car
osrm-extract canada-latest.osm.pbf -p ../osrm-backend/profiles/car.lua #it's a bug that osrm-extract doesn't use /usr/local/share/osrm/profiles/ as a default path
osrm-contract canada-latest.osrm
#get rid of unnecessary files https://github.com/Project-OSRM/osrm-backend/issues/1480#issuecomment-622507140
> canada-latest.osrm
rm -rf *.osrm.cnbg*
rm -rf *.osrm.ebg
rm -rf *.osrm.enw
rm -rf *.osrm.restrictions
rm -rf *.osrm.turn_penalties_index
cd ..
mkdir foot
mv car/canada-latest.osm.pbf foot
scp -r car root@rpc.lobojane.com:/root & #if you truly only have 7.6GB to spare, don't use & and wait for this copy to finish. then rm -rf car
cd foot
osrm-extract canada-latest.osm.pbf -p ../osrm-backend/profiles/foot.lua
osrm-contract canada-latest.osrm
#get rid of unnecessary files https://github.com/Project-OSRM/osrm-backend/wiki/Toolchain-file-overview
> canada-latest.osrm
rm -rf *.osrm.cnbg*
rm -rf *.osrm.ebg
rm -rf *.osrm.enw
rm -rf *.osrm.restrictions
rm -rf *.osrm.turn_penalties_index
cd ..
mv foot/canada-latest.osm.pbf .
scp -r foot root@rpc.lobojane.com:/root
scp $(which osrm-routed) root@rpc.lobojane.com:/root
fg #The car upload is probably complete since it started earlier and is smaller. If not, come back later
ssh root@rpc.lobojane.com
nano /lib/systemd/system/osrm-foot.service
```
contents:<sup id="a8">[8](#f8)</sup>
```
[Unit]
Description=OSRM server (foot profile)
After=network.target

[Service]
Type=exec
ExecStart=/root/osrm-routed --ip 127.0.0.1 --port 5000 --threads 1 --max-viaroute-size -1 --max-trip-size -1 --max-table-size -1 --max-matching-size -1 --max-nearest-size -1 -a CH /root/foot/canada-latest.osrm
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```bash
systemctl enable --now osrm-foot
cp /lib/systemd/system/osrm-foot.service /lib/systemd/system/osrm-car.service
sed -i 's/foot/car/g' /lib/systemd/system/osrm-car.service
sed -i 's/5000/5001/g' /lib/systemd/system/osrm-car.service
systemctl enable --now osrm-car
exit
```
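To sanity-check a routing server once it's up, hit its HTTP API directly on the instance. The snippet below only assembles a sample `/route/v1` request URL against the foot profile on port 5000 (matching the unit file above); the Toronto coordinates are placeholder `lon,lat` pairs. Pipe the result to `curl` on the server:

```bash
# Build a sample OSRM route request. Coordinates are lon,lat pairs joined
# by ';' (placeholder values); port 5000 is the foot profile, 5001 is car.
base="http://127.0.0.1:5000/route/v1/foot"
coords="-79.3871,43.6426;-79.3832,43.6532"
url="$base/$coords?overview=false"
echo "$url"   # then, on the instance: curl "$url"
```

A healthy server replies with JSON containing `"code":"Ok"`.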
# References
<b id="f1">1</b> ["The comment is initialized to “user@host” when the key is created, but can be changed using the -c option."](https://man.openbsd.org/ssh-keygen#DESCRIPTION) The OpenSSH public key format is implicitly defined [here](https://man.openbsd.org/ssh-keygen#FILES) ("The contents of [~/.ssh/id_rsa.pub] should be added to ~/.ssh/authorized_keys") and explicitly defined [here](http://man.openbsd.org/sshd#AUTHORIZED_KEYS_FILE_FORMAT) ("Public keys consist of the following space-separated fields: options, keytype, base64-encoded key, comment. The options field is optional. The comment field is not used for anything (but may be convenient for the user to identify the key). The optional comment field continues to the end of the line.") [↩](#a1)

<b id="f2">2</b> If you're really cheap, you can start with an HDD instead and [upgrade later](https://stackoverflow.com/a/45827963). [↩](#a2)

<b id="f3">3</b> Go to "Compute Engine" > "Disks" > [NAME OF VM INSTANCE / DISK] > "Edit". Increase to the desired size. If this is done while the instance is online, run
```bash
growpart /dev/sda 1
resize2fs /dev/sda1
```
or `systemctl reboot` for the changes to be applied.
[↩](#a3)

<b id="f4">4</b> ["[`deb-src` is] needed only if you want to compile some package yourself, or inspect the source code for a bug. Ordinary users don't need to include such repositories."](https://unix.stackexchange.com/q/20504)
["contrib packages contain DFSG-compliant software, but have dependencies not in main (possibly packaged for Debian in non-free). non-free contains software that does not comply with the DFSG."](https://wiki.debian.org/SourcesList#Component) [↩](#a4)

<b id="f5">5</b> Changes in CPU and RAM require the instance to be shut down with `systemctl poweroff`. Then they can be changed by going to "Compute Engine" > "VM Instances" > [NAME OF VM INSTANCE] > "Edit". [↩](#a5)

<b id="f6">6</b> ["PartOf" is weaker than "Requires"](https://unix.stackexchange.com/a/519230), so the right choice is ["BindsTo" ( > "Requires" > "Wants")](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo=). ["After" relates to ordering of dependencies. If not present, all dependencies and itself will be started in parallel.](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Wants=) ["network.target" doesn't need to be anywhere else other than "After".](https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget) ["WantedBy" gives postgresql an optional dependency such that it'll try to launch lobomj, but won't fuss if lobomj fails](https://unix.stackexchange.com/a/375302) [↩](#a6)

<b id="f7">7</b> After modifying, you have to `systemctl restart systemd-journald`. It's [safe](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html#Stream%20logging). [↩](#a7)

<b id="f8">8</b> The default IP address used by `osrm-routed` is `0.0.0.0`, which listens on [all addresses](https://en.wikipedia.org/wiki/0.0.0.0) (localhost + exposed public). OSRM is only used internally, so use localhost `127.0.0.x`. If exposed-public-only were wanted instead, use `$(hostname -I)`. [↩](#a8)

# FAQ

### How do I get access to the production server?

Give your public key to someone who already has access. You'll be root so be CAREFUL. For the person who already has access: paste the new person's public key into `/root/.ssh/authorized_keys` on a new line.
### Does systemd start a timer when last run has failed?

Yes. Determined through testing. Timers and services are decoupled. Todo: find a source.
### Does systemd start a timer even though its service is still running?

No. If this is a problem, limit the service's `TimeoutStartSec`. ["Note that in case the unit to activate is already active at the time the timer elapses it is not restarted, but simply left running. There is no concept of spawning new service instances in this case."](https://www.freedesktop.org/software/systemd/man/systemd.timer.html#Description)
### How does systemd handle timers that can't be run because the server is powered off, for example?

["If [Persistent=]true, the time when the service unit was last triggered is stored on disk. When the timer is activated, the service unit is triggered immediately if it would have been triggered at least once during the time when the timer was inactive."](https://www.freedesktop.org/software/systemd/man/systemd.timer.html#Persistent=)

# Maintenance

After [updating](#update), `needrestart` will prompt to restart services and hard reboot if necessary. `needrestart` can also be run manually.

Use `ncdu` to see what's consuming the most storage space and `df -h` to see how much is left.

To check the logs of a service named "lobomj": `journalctl -u lobomj`. Some useful flags:
`--grep=` for searching

`--reverse` [so that the newest entries are displayed first](https://www.freedesktop.org/software/systemd/man/journalctl.html#-f)

`--follow` to see entries as they happen

`-b -1` to see entries from the previous boot

`--since "20 min ago"` or `--since="2012-10-30 18:17:16"` or `--until`

Before rotated logs are deleted (the condition can be configured in [`/etc/systemd/journald.conf`](https://www.freedesktop.org/software/systemd/man/journald.conf.html)<sup id="a7">[7](#f7)</sup>), they can be backed up separately to somewhere external. Then, they can be viewed with `--file=` or `--directory=`
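For example, a size cap in `/etc/systemd/journald.conf` (the 500M figure is an arbitrary illustration; `SystemMaxUse=` bounds the total disk the journal may use):

```
[Journal]
SystemMaxUse=500M
```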


To check the current state of a service named "lobomj": `systemctl status lobomj`. Replace `status` with `start`, `restart`, or `stop` (self-explanatory).
`enable` and `disable` pertain to unit activation on bootup.

To see when the next scrape will run: `systemctl list-timers`. The status of a timer is checkable too: `systemctl status lobomj-scrape-ocs.timer` but it's not that exciting.

# Todo

* New `osrm-routed` binary or car or foot profiles should refresh the hints in the database
* Disable TLS for qa. Even though netlify forces TLS, what happens if you point to another domain and don't generate a certificate?
* Even though TLS renewal is automatic (by Let's Encrypt), https://github.com/uNetworking/uWebSockets.js/issues/65 and the nature of websockets mean an existing connection cannot have its certificate changed without dropping the connection, so the server will restart whenever a new certificate is created (every 60 days). Implement a clean shutdown (reject new connections and requests, close existing connections whenever requests are finished). With just a clean shutdown and [`systemd`](#restart), the downtime will be dependent on how long it takes for the current sessions to finish. To minimize downtime, a new instance of the server would need to be started simultaneously to handle new connections / requests with the new certificate, but how? In code? An alternative is to wait until there are zero connections and then restart. Can wait up to 30 days lmao jk don't do this gamble.
* Hot reload of event handlers to decrease downtime
* systemd slices to limit resource usage of scrapers
* GitHub webhook or git server to push to for live updates on push(diff initialize.sql and attempt to upgrade or create fresh)
* backups (was thinking rolling 3 days)
* host onsite once >= n1-standard-32 because holy fuck $777.12/month. A 3950X + 128GB RAM can be had for $1860. SSD is not bad at $.25/GB/month(like buying a shitty SSD each month and throwing it out) but performance is mediocre compared to 3D Xpoint. 20Gbps down 32Gbps up is expensive and hard to find though. SLA (power + internet) too... Backups can be stored on HDD