
Initial commit

Signed-off-by: Julien Riou <julien@riou.xyz>
Julien Riou 2024-12-22 08:26:17 +01:00 committed by Julien Riou
commit f0cab2f7c3
Signed by: jriou
GPG key ID: 9A099EDA51316854
54 changed files with 19984 additions and 0 deletions

content/_index.markdown
---
title: Home
---
<div class="center">
<img src="/imgs/profile.jpg" class="rounded" alt="Profile pic" />
[Mastodon](https://hachyderm.io/@jriou) | [Code](https://git.riou.xyz/jriou) | [LinkedIn](https://www.linkedin.com/in/jriou) | [E-mail](mailto:julien@riou.xyz) | [PGP](https://keys.openpgp.org/vks/v1/by-fingerprint/458D536DEE2B11404BC4BF6E9A099EDA51316854)
</div>

---
title: PostGIS and the Hard Rock Cafe collection
date: 2024-05-27T18:00:00+02:00
categories:
- postgresql
---
In 2007, I went to the west coast of the United States, where I visited the
Hard Rock Cafe store in Hollywood, CA, and bought the first T-Shirt of my
collection. Now I'm an open-source DBA and my number one database is
[PostgreSQL](https://www.postgresql.org/). So I've decided to use the
[PostGIS](https://postgis.net/) extension to plan my next vacations or
conferences around buying more T-Shirts.
# The collection
## Inventory
Let's take inventory of my collected T-Shirts by looking at my closet.
![](/hrc/t-shirts.jpg)
Each title represents a location, generally a city, but sometimes a well-known
place like a stadium. This is enough to find the related shop in a database.
The titles can be written into a file:
```
Dublin
Los Angeles
Prague
London
```
([collection.csv](/hrc/collection.csv))
## Coordinates
The next step is to add coordinates. This information can be found by querying
the [Nominatim](https://nominatim.org/) API based on the
[OpenStreetMap](https://www.openstreetmap.org) community-driven project.
```python
#!/usr/bin/env python3
import csv
import time

import requests

if __name__ == "__main__":
    headers = {"User-Agent": "Hard Rock Cafe Blog Post From Julien Riou"}
    session = requests.Session()
    session.headers.update(headers)
    with open("collection_with_coordinates.csv", "w") as dest:
        writer = csv.writer(dest)
        with open("collection.csv", "r") as source:
            for row in csv.reader(source):
                name = row[0]
                # let requests URL-encode the location name
                params = {"q": name, "limit": 1, "format": "json"}
                r = session.get("https://nominatim.openstreetmap.org/search", params=params)
                time.sleep(1)  # respect the Nominatim usage policy
                r.raise_for_status()
                data = r.json()
                if len(data) == 1:
                    data = data[0]
                    # write lat before lon, matching the CSV shown below
                    writer.writerow([name, data["lat"], data["lon"]])
                else:
                    print(f"Location not found for {name}, skipping")
```
([collection.py](/hrc/collection.py))
The Python script iterates over the `collection.csv` file, queries the
Nominatim API to find the most relevant OpenStreetMap node, then writes the
coordinates to the `collection_with_coordinates.csv` file using the
comma-separated values (CSV) format.
```
Dublin,53.3493795,-6.2605593
Los Angeles,34.0536909,-118.242766
Prague,50.0596288,14.446459273258009
London,51.4893335,-0.14405508452768728
```
([collection_with_coordinates.csv](/hrc/collection_with_coordinates.csv))
# The shops (or "nodes")
Now we need a complete list of Hard Rock Cafe locations with their coordinates
to match the collection.
## OpenStreetMap
My first idea was to use OpenStreetMap, which should provide the needed
dataset. I tried the Nominatim API but queries are [limited to 40
results](https://nominatim.org/release-docs/latest/api/Search/#parameters). I
could have [downloaded the entire
dataset](https://wiki.openstreetmap.org/wiki/Downloading_data) and imported it
into a local [PostgreSQL instance](https://osm2pgsql.org/), but that would
have been space- and time-consuming. So I used the [Overpass
API](https://wiki.openstreetmap.org/wiki/Overpass_API) with [this
query](https://overpass-turbo.eu/s/1LwQ) (thanks
[Andreas](https://andreas.scherbaum.la/)). In the end, the quality of the data
was not satisfying. The amenity could be restaurant, bar, pub, cafe, nightclub
or shop. The name was sometimes written with an accent ("é"), sometimes
without ("e"). Sometimes the brand was not reported at all. Even with all
those filters, there was a
[node](https://www.openstreetmap.org/node/6260098710) that was not even a Hard
Rock Cafe. The more the query grew, the more I wanted to use another method.
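For the record, the filters described above can be sketched as an Overpass query builder. This is a reconstruction for illustration (amenity values, accented or unaccented name, brand tag as a fallback), not the exact query linked above:

```python
#!/usr/bin/env python3
# Reconstruction of the Overpass filters described in the text: several
# amenity values, a name with or without the accent, and the brand tag.
# Not the exact query linked above.

def build_overpass_query() -> str:
    return """
[out:json][timeout:60];
(
  node["amenity"~"^(restaurant|bar|pub|cafe|nightclub)$"]["name"~"Hard Rock Caf[eé]"];
  node["shop"]["name"~"Hard Rock Caf[eé]"];
  node["brand"="Hard Rock Cafe"];
);
out body;
""".strip()
```

The query can be sent to a public Overpass endpoint (for example `https://overpass-api.de/api/interpreter`) as the `data` form field of a POST request, which returns the matching nodes as JSON.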
## Website
I decided to parse the official website instead. Should I use a well-known
library like [Selenium](https://selenium-python.readthedocs.io/) or
[ferret](https://www.montferret.dev/)? Given the personal time I had for this
project, I've chosen the quick and dirty path. Let me present the ugly but
functional one-liner that parses the official Hard Rock Cafe website:
```
curl -sL https://cafe.hardrock.com/locations.aspx | \
grep 'var currentMapPoint=' | \
sed "s/.*{'title':/{'title':/g;s/,'description.*/}/g;s/'/\"/g" | \
sed 's/{"title"://g;s/"lat":/"/g;s/,"lng":/","/g;s/}/"/g' \
> nodes.csv
```
Very ugly, not future-proof, but it did the job.
```
"Hard Rock Cafe Amsterdam","52.36211","4.88298"
"Hard Rock Cafe Andorra","42.507707","1.531977"
"Hard Rock Cafe Angkor","13.35314","103.85676"
"Hard Rock Cafe Asuncion","-25.2896910","-57.5737599"
```
([nodes.csv](/hrc/nodes.csv))
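If the one-liner ever breaks, the same extraction can be sketched in Python with a single regular expression. This assumes the page still embeds `var currentMapPoint=` lines in the format the sed pipeline above implies (an assumption, not a verified spec):

```python
#!/usr/bin/env python3
# Sketch of a slightly less fragile parser for the same page, assuming
# the HTML embeds lines like:
#   var currentMapPoint={'title':'Hard Rock Cafe Amsterdam','lat':52.36211,'lng':4.88298,'description':'...'};
import csv
import re
import sys

POINT_RE = re.compile(
    r"var currentMapPoint=\{'title':'(?P<title>[^']*)',"
    r"'lat':(?P<lat>-?[\d.]+),'lng':(?P<lng>-?[\d.]+)"
)


def parse_points(html: str):
    """Yield (title, lat, lng) tuples for every map point found in the page."""
    for match in POINT_RE.finditer(html):
        yield match.group("title"), match.group("lat"), match.group("lng")


if __name__ == "__main__":
    sample = ("var currentMapPoint={'title':'Hard Rock Cafe Amsterdam',"
              "'lat':52.36211,'lng':4.88298,'description':'...'};")
    writer = csv.writer(sys.stdout, quoting=csv.QUOTE_ALL)
    for row in parse_points(sample):
        writer.writerow(row)
```

To reproduce the one-liner, replace `sample` with `sys.stdin.read()` and pipe the output of `curl -sL https://cafe.hardrock.com/locations.aspx` into the script.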
# Data exploration
The tool of choice to import and analyze this data is PostgreSQL and its
PostGIS extension. I've used [Docker](https://www.docker.com/) to have a
disposable local instance to perform quick analysis.
```
docker run -d --name postgres -v "$(pwd):/mnt:ro" \
-e POSTGRES_USER=hrc -e POSTGRES_PASSWORD=insecurepassword \
-e POSTGRES_DB=hrc \
postgis/postgis:16-3.4
docker exec -ti postgres psql -U hrc -W
```
The instance is now started and we are connected.
## Import
The [COPY](https://www.postgresql.org/docs/current/sql-copy.html) command in
PostgreSQL imports CSV lines easily. We'll use its psql alias (`\copy`) to
send the data directly through the client.
```
create table collection (
    name text primary key,
    lat numeric,
    lon numeric
);
\copy collection (name, lat, lon) from '/mnt/collection_with_coordinates.csv' csv;

create table nodes (
    name text primary key,
    lat numeric,
    lon numeric
);
\copy nodes (name, lat, lon) from '/mnt/nodes.csv' delimiter ',' csv;
```
## Correlation
The SQL query takes all the rows from the `collection` table and, for each of
them, tries to find a row in the `nodes` table within 50 km based on
coordinates.
```
select c.name as tshirt, n.name as restaurant,
round((ST_Distance(ST_Point(c.lon, c.lat), ST_Point(n.lon, n.lat), true)/1000)::numeric, 2)
as distance_km
from collection c
left join nodes n
on ST_DWithin(ST_Point(c.lon, c.lat), ST_Point(n.lon, n.lat), 50000, true)
order by c.name, distance_km;
```
The PostGIS functions used are:
* [ST_Point](https://postgis.net/docs/ST_Point.html) to create a point in space with coordinates
* [ST_Distance](https://postgis.net/docs/ST_Distance.html) to compute the distance between two points
* [ST_DWithin](https://postgis.net/docs/ST_DWithin.html) to filter only rows with a distance less than or equal to the provided value
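As a sanity check outside the database, a minimal haversine sketch approximates what `ST_Distance(..., true)` computes (PostGIS uses a spheroid, so the spherical result below differs by a fraction of a percent):

```python
#!/usr/bin/env python3
# Spherical (haversine) approximation of the geodesic distance that
# ST_Distance(point1, point2, true) returns. PostGIS computes on a
# spheroid, so expect small differences.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlambda = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlambda / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


if __name__ == "__main__":
    # Dublin from the collection vs the Amsterdam node: roughly 755 km
    print(round(haversine_km(53.3493795, -6.2605593, 52.36211, 4.88298), 2))
```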
Result:
```
tshirt | restaurant | distance_km
------------------+--------------------------------------------+-------------
Amsterdam | Hard Rock Cafe Amsterdam | 1.38
Angkor | Hard Rock Cafe Phnom Penh | 0.75
Antwerp | Hard Rock Cafe Brussels | 41.84
Barcelona | Hard Rock Cafe Barcelona | 0.64
Berlin | Hard Rock Cafe Berlin | 4.32
Boston | (null) | (null)
Brussels | Hard Rock Cafe Brussels | 0.10
Detroit | (null) | (null)
Dublin | Hard Rock Cafe Dublin | 0.40
Hamburg | Hard Rock Cafe Hamburg | 2.36
Ho Chi Minh City | (null) | (null)
Hollywood | Hard Rock Cafe Hollywood on Hollywood Blvd | 1.07
Lisbon | Hard Rock Cafe Lisbon | 1.07
London | Hard Rock Cafe London | 1.66
London | Hard Rock Cafe London Piccadilly Circus | 2.40
Los Angeles | Hard Rock Cafe Hollywood on Hollywood Blvd | 10.46
Miami | Hard Rock Cafe Miami | 0.97
Miami | Hard Rock Cafe Hollywood FL | 30.83
Miami Gardens | Hard Rock Cafe Hollywood FL | 12.62
Miami Gardens | Hard Rock Cafe Miami | 19.26
New York | Hard Rock Cafe New York Times Square | 5.18
New York | Hard Rock Cafe Yankee Stadium | 14.53
Orlando | Hard Rock Cafe Orlando | 11.50
Oslo | (null) | (null)
Paris | Hard Rock Cafe Paris | 2.14
Prague | Hard Rock Cafe Prague | 3.59
San Francisco | Hard Rock Cafe San Francisco | 3.36
Singapore | Hard Rock Cafe Singapore | 5.74
Singapore | Hard Rock Cafe Changi Airport Singapore | 18.88
Singapore | Hard Rock Cafe Puteri Harbour | 19.33
Yankee Stadium | Hard Rock Cafe Yankee Stadium | 0.14
Yankee Stadium | Hard Rock Cafe New York Times Square | 9.53
```
We can identify multiple patterns here:
* exact match
* closed restaurants (Boston, Detroit, Ho Chi Minh City, Oslo)
* multiple restaurants (Miami, Miami Gardens, Singapore, New York, Yankee Stadium)
* multiple T-Shirts (Miami, Los Angeles)
* wrong match (Angkor, Antwerp)
* missed opportunities (Hollywood FL, Piccadilly Circus)
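The bucketing above can be sketched programmatically. This is a simplified classification over a few hard-coded rows of the result (it cannot detect wrong matches like Angkor, which need a human eye):

```python
#!/usr/bin/env python3
# Bucket the rows returned by the correlation query into some of the
# patterns listed above. Rows are hard-coded here for illustration;
# None stands for the (null) columns of the result.

def classify(rows):
    """Map each T-Shirt to one of the observed patterns."""
    matches = {}
    for tshirt, restaurant, _distance_km in rows:
        matches.setdefault(tshirt, [])
        if restaurant is not None:
            matches[tshirt].append(restaurant)
    buckets = {}
    for tshirt, restaurants in matches.items():
        if not restaurants:
            buckets[tshirt] = "no match (closed restaurant?)"
        elif len(restaurants) == 1:
            buckets[tshirt] = "single match"
        else:
            buckets[tshirt] = "multiple restaurants"
    return buckets


if __name__ == "__main__":
    rows = [
        ("Boston", None, None),
        ("Dublin", "Hard Rock Cafe Dublin", 0.40),
        ("Singapore", "Hard Rock Cafe Singapore", 5.74),
        ("Singapore", "Hard Rock Cafe Changi Airport Singapore", 18.88),
    ]
    for tshirt, bucket in classify(rows).items():
        print(f"{tshirt}: {bucket}")
```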
I've created a [script](/hrc/update.sql) to update the names in the
`collection` table to match the names in the `nodes` table, so they can be
joined by name instead of by location.
# Next locations
The last step of the exploration is to find Hard Rock Cafe locations within a
reasonable distance from home (1,000 km). As I don't want to disclose my exact
position, we'll search for "Belgium". The country is small enough that this
should not be an issue.
```
$ curl -A "Hard Rock Cafe Blog Post From Julien Riou" \
-sL "https://nominatim.openstreetmap.org/search?limit=1&format=json&q=Belgium" | \
jq -r '.[0].name,.[0].lon,.[0].lat'
België / Belgique / Belgien
4.6667145
50.6402809
```
The query to search for shops that I haven't already visited looks like this:
```
select n.name,
round((ST_Distance(ST_Point(n.lon, n.lat), ST_Point(4.6667145, 50.6402809), true)/1000)::numeric, 2)
as distance_km
from nodes n
left join collection c
on n.name = c.name
where c.name is null
and ST_Distance(ST_Point(n.lon, n.lat), ST_Point(4.6667145, 50.6402809), true)/1000 < 1000
order by distance_km;
```
The final result:
```
name | distance_km
-----------------------------------------+-------------
Hard Rock Cafe Cologne | 164.87
Hard Rock Cafe London Piccadilly Circus | 349.99
Hard Rock Cafe Manchester | 569.37
Hard Rock Cafe Munich | 573.42
Hard Rock Cafe Innsbruck | 618.90
Hard Rock Cafe Newcastle | 640.64
Hard Rock Cafe Milan | 666.41
Hard Rock Cafe Copenhagen | 769.51
Hard Rock Cafe Edinburgh | 789.31
Hard Rock Cafe Venice | 812.96
Hard Rock Cafe Wroclaw | 870.89
Hard Rock Cafe Vienna | 890.23
Hard Rock Cafe Florence | 911.37
Hard Rock Cafe Gothenburg | 918.32
Hard Rock Cafe Andorra | 935.20
```
# Conclusion
According to this study, my next vacations or conferences should take place
in Germany or the UK. A perfect opportunity to go to
[PGConf.DE](https://2024.pgconf.de/) and [PGDay UK](https://pgday.uk/)!

+++
title = "So I've self-hosted my code using Forgejo"
date = 2024-12-21T12:00:00+01:00
+++
The open source philosophy is often reduced to the source code of a piece of
software being published somewhere under a permissive license. That somewhere
is mostly [GitHub](https://github.com/), using [Git](https://git-scm.com/) as
the source code management tool. GitHub provides a centralized place for
everyone to contribute to open source projects. While this is a good boost for
those projects, having this giant place use your code to train AI models, or
even give away [an AI product that we don't need for
free](https://www.msn.com/en-us/technology/tech-companies/github-is-making-its-ai-programming-copilot-free-for-vs-code-developers/ar-AA1w9UrA),
is not aligned with my values.
The second problem I have with GitHub is that, when I was a student at
university, I created an [account](https://github.com/riouj) using my former
handle (riouj), my student e-mail and my French phone number. Years later, I
tried to recover this account but, having lost access to both recovery
methods, GitHub support said "Nope", even though I could prove my identity and
provide my diploma. My professional handle (jriou) is used by [someone
else](https://github.com/jriou). So I created an
[account](https://github.com/jouir) using a handle I had used on some forums
(jouir), which is not very professional if you speak French. Moving somewhere
else will finally allow me to use my regular nickname!
Alright, now where should I move my code?
There are multiple online services to store your code, like
[bitbucket.org](https://bitbucket.org/). We use Atlassian products at work, so
why not give their free tier a try? There's also
[gitlab.com](https://about.gitlab.com/), which is famous for being one of the
biggest alternatives to GitHub, but that would also mean my code being hosted
by another US corporation. Then I heard about
[Gitea](https://about.gitea.com/), which was taken over by a for-profit
company, and about the [Forgejo](https://forgejo.org/) fork backed by
[Codeberg](https://codeberg.org/), a non-profit organization based in Germany,
in the EU. I could push my code to a service managed by an association sharing
my values...
Or I could deploy the free and open source software (FOSS) directly on one of
my homelab servers! Exposing the source code of my personal projects should
not use that many resources, especially bandwidth, and should not be sensitive
to latency, right? Let's find out.
# The setup
My hosts rely on a home-made backup solution based on ZFS, replicated to
three locations. Everything is explained in my [Journey of a Home-based
Personal Cloud Storage
Project](https://julien.riou.xyz/socallinuxexpo2024.handout.html) talk and on
my [self-hosting](https://self-hosting.riou.xyz/) blog. I've picked the server
with the most bandwidth to host the Forgejo instance. As I use Ansible to
manage my personal infrastructure, I've created an [Ansible
role](https://git.riou.xyz/jriou/ansible-role-forgejo) to manage Forgejo with
Docker Compose. The [official
documentation](https://forgejo.org/docs/next/admin/installation-docker/) is
simple and easy to follow. In a matter of minutes, my instance was up and
running!
In order to expose the instance to the public and share my software
contributions with the world, I rely on some components that are not
self-hosted: a domain name and a virtual private server (VPS), hosting OpenVPN
and Nginx, to route the traffic to my home network. I should try
[tailscale](https://tailscale.com/) one day but that's another topic.
![Forgejo](/forgejo.svg)
Exposing HTTPS is pretty easy with Nginx and there is plenty of documentation
for that purpose. For SSH, which is plain TCP, I've used Nginx streams:
```
load_module /usr/lib/nginx/modules/ngx_stream_module.so;
stream {
    server {
        listen 222;
        proxy_pass IP.OF.VPN.INSTANCE:222;
    }
}
```
I first tried to use iptables to forward the SSH port to the private instance
but failed miserably. The Nginx stream solution is much easier! Don't forget
to allow the incoming port on the VPS: even after years of experience, I fell
into this trap and spent at least one hour debugging why this damn Nginx
stream configuration was not working.
And the website is live, ready to receive my code!
# Code migration
My code is not very popular. I mostly have archived repositories, my
maintained repositories have little to no issues, I don't use GitHub Actions
(yet), and I have fewer than 20 repositories. So the migration was pretty
simple:
1. Create the repository on Forgejo, including the description
1. Disable what I don't use (wiki, releases, projects, actions, etc.)
1. Add a "forgejo" remote to the local git repository
1. Push everything, including tags, to the "forgejo" remote
1. Rename the "forgejo" remote to "origin"
1. Delete the repository from GitHub
As far as I know, there's no way to force the ordering of your repositories
on Forgejo like you can on GitHub with pinned repositories. So if you would
like to control the order in which visitors see your repositories on your
profile page, you should create them from the oldest to the newest, which is
the default ordering on Forgejo. Personally, I don't care about the order, so
I migrated them in a "first seen, first migrated" fashion. The git history is
preserved though.
# What's next
The basic setup is done but there's still work to do like setting up local
actions to ensure code quality.
# Conclusion
Now I have all my repositories, on my own infrastructure, [publicly
available](https://git.riou.xyz/jriou), running entirely on FOSS, and this is
beautiful.
![Forgejo screenshot](/forgejo-screenshot.png)
I would like to thank the Forgejo contributors and the Codeberg organization
for their amazing work to provide an open source self-hosted alternative to
GitHub. The best way to really thank them is to [donate
regularly](https://donate.codeberg.org/) (which I'm proud to do).

---
title: Yubikey for personal use
date: 2024-08-07T07:45:00+02:00
categories:
- yubikey
---
At work, we use SSH to connect to our infrastructure using
[PIV](https://developers.yubico.com/PIV/Guides/SSH_with_PIV_and_PKCS11.html)
and a [Yubikey](https://www.yubico.com/) to comply with the PCI DSS standard.
I've seen a post from [Christian
Stankowic](https://chaos.social/@stdevel/112490125694988342) (aka "stdevel") on
Mastodon showing a Yubikey for personal use, so I decided to give it a try.
This blog post describes how I use a Yubikey outside of work.
![](/yubikey/picture.jpg)
Parts:
- [YubiKey 5 NFC (USB-A)](https://www.yubico.com/be/product/yubikey-5-nfc/)
- [Lanyard](https://www.yubico.com/be/product/yubico-keyport-parapull-lanyard/)
- [Double Rainbow Cover](https://www.yubico.com/be/product/yubistyle-covers-usb-a-c-nfc/)
I'm not paid to promote these products.
# Disclaimer
Modifying security keys may be dangerous. I cannot be held responsible for any
data loss. Use these commands with caution.
# Requirements
I'm running Ubuntu 24.04 (Noble Numbat) at the time of writing. The easiest
way to install Yubikey Manager, the tool used to set up the Yubikey, is the
PPA.
```
sudo add-apt-repository ppa:yubico/stable
sudo apt-get update
sudo apt-get install yubikey-manager
ykman --version
```
If you have another system, please [follow the official
documentation](https://docs.yubico.com/software/yubikey/tools/ykman/).
# SSH with FIDO2
At home, I self-host multiple services, from a [distributed file
storage](/socallinuxexpo2024.handout.html) and
[finances](https://www.firefly-iii.org/) to my home lab. They run on various
hosts accessible via SSH. But instead of relying on PIV like at work, I wanted
to try something more secure and supposedly easier to use:
[FIDO2](https://developers.yubico.com/SSH/Securing_SSH_with_FIDO2.html).
The components:
![](/yubikey/yubikey-ssh.png)
In short, it's a regular public and private key system using [elliptic-curve
cryptography](https://en.wikipedia.org/wiki/Elliptic-curve_cryptography)
("ECC"), but instead of storing the private key on your client, an _access_
key (with an "sk" suffix, for "Security Key") is used to access the private
key on the Yubikey.
Like regular SSH keys, a **password** can protect the private key itself.
With FIDO2 on the Yubikey, there are two more security mechanisms to be aware
of:
- **PIN**: an alphanumeric password, special characters allowed, that must be
  **at least** 4 characters long
- **Touch**: a check that requires you to touch the "Y" area of the Yubikey
  with your finger to validate your physical presence
Note that the minimum version of OpenSSH with FIDO2 support is 8.2, which means
at least Debian 11. If you have older versions around, it's time to upgrade!
## PIN
By default, the FIDO PIN is not defined. You should generate a strong PIN from
a password manager like [KeePassXC](https://keepassxc.org/).
```
read -s PIN
echo -n "$PIN" | wc -c  # sanity check: count the PIN characters
ykman fido access change-pin --new-pin "${PIN}"
```
## SSH keys
The next step is to create an SSH key pair. This operation has to be repeated
for each client; the Yubikey is able to store multiple private keys. A private
key should not be re-used by multiple clients, because that would mean moving
the _access_ key around. That key, stored on the client
(`~/.ssh/id_ed25519_sk`) and used to unlock the private key on the Yubikey,
must stay private, and moving it around is bad practice.
I've chosen the [Ed25519](https://en.wikipedia.org/wiki/EdDSA) algorithm
instead of
[ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm)
because interoperability is not an issue for me: all my systems are up-to-date
and compatible. Both are strong options. Neither is
[RSA](https://en.wikipedia.org/wiki/RSA_(cryptosystem)).
```
ssh-keygen -t ed25519-sk -O resident -O verify-required \
-O "application=ssh:$(hostname -f)" \
-C "$(hostname -f)"
```
Enter the **PIN**, **touch** the Yubikey, then set a **password** to protect
the SSH key.
Finally, copy the public key (file `~/.ssh/id_ed25519_sk.pub`) to the remote
hosts.
## SSH agent
The main problem is that we need to pass three security checks for every
single SSH connection:
1. Enter the password of the SSH key
1. Enter the FIDO2 PIN of the Yubikey
1. Touch the Yubikey
I know this is security, but it's not very user-friendly. So I've tried to use
an SSH agent to "cache" the private key for a limited amount of time, like I
do for regular keys and even with PIV.
Let's try to add the identities to the agent:
```
$ ssh-add -K
Enter PIN for authenticator:
Resident identity added: ED25519-SK SHA256:***
$ ssh-add -l
256 SHA256:*** (ED25519-SK)
```
Then connect:
```
$ pilote
sign_and_send_pubkey: signing failed for ED25519-SK "/home/jriou/.ssh/id_ed25519_sk" from agent: agent refused operation
```
Unfortunately, the SSH agent doesn't seem to handle FIDO keys properly. To
work around this issue, you can disable the agent in `~/.ssh/config`:
```
Host *
IdentityAgent none
```
You still need to pass the three security checks but, at least, you can
connect.
I use [Ansible](https://github.com/ansible/ansible) to manage most of my
personal infrastructure. With FIDO2, you can multiply the number of security
checks by the number of managed hosts. This is a nightmare. So, reluctantly,
I've created a regular ed25519 key pair used only for Ansible, from one host
that will never leave my secure home.
# FIDO U2F to replace OTPs
This [open authentication
standard](https://www.yubico.com/authentication-standards/fido-u2f-standard/)
enables you to add an additional security check, based on a hardware key, to
log in to a website for example. Traditionally, you enter your user name and
your password. You can add [two-factor
authentication](https://en.wikipedia.org/wiki/Multi-factor_authentication)
with an application generating one-time passwords ("OTP"), which are only
valid for a limited amount of time. I have way too many OTPs registered in my
[FreeOTP](https://freeotp.github.io/) application. FIDO U2F is a way to
replace an OTP with a touch on your Yubikey. As simple as that.
It's supported by plenty of platforms, including (but not limited to)
Mastodon, GitHub and LinkedIn. The workflow to set up a hardware key often
goes like this:
1. Go to Settings / Security or Account
1. Look for 2FA or MFA
1. Set up OTP (even if you won't use it)
1. Search for hardware key
1. Enter the FIDO PIN
1. Touch the Yubikey
The interface will be different for each platform but the workflow should be
the same.
# PGP
Nowadays, communication systems provide transparent [end-to-end
encryption](https://en.wikipedia.org/wiki/End-to-end_encryption). But it's not
always enabled by default (I see you, [Telegram](https://telegram.org/)). Do
you trust those providers claiming they are not able to decrypt your messages?
Where are my keys? Can I bring my own? _It's encrypted, trust me bro._
Technically, Pretty Good Privacy ("PGP") solves this issue: I can generate my
own set of keys to encrypt or authenticate data using public-key cryptography.
[ECC](https://www.rfc-editor.org/rfc/rfc6637) is supported. The **private
key** can be protected by the **Yubikey**.
The only problem is finding people like me, who believe in privacy and are
able to use PGP to encrypt communications. In our daily lives, we mostly talk
to non-technical people who don't know a word about how cryptography, or even
computers, work. E-mails are less and less used. Which means I don't use PGP
to encrypt my messages very often, except at work.
Another usage of PGP is to encrypt **sensitive files**. In that case, storing
the private key on a Yubikey is a good way to make it more secure. But at the
same time, if you lose the key, you will never be able to decrypt the original
files. You can still export the private key and store it somewhere secure. But
how secure is that? Here comes the endless loop of paranoia: encrypting the
key of the key of the key of the... You get it. I've chosen to protect the
exported private key with a password stored in my password manager, which has
its own password that I keep in my head. The security stops if I forget the
password manager's password, and that's OK.
I also use PGP to **sign my git commits**.
## Access
There are two different PINs for the PGP application on the Yubikey:
- **Admin PIN**: to unlock PGP administration commands
- **PIN**: to unlock the PGP private key
The PIN is specific to the PGP application. If you have already set up a FIDO
PIN, please choose a different one for PGP.
Unlike FIDO, default PINs are set at the factory. Run the "reset" operation
only if you don't know the current PIN. Be careful, this is a **destructive**
operation: any existing PGP key will be removed.
```
ykman openpgp reset
```
Change the Admin PIN:
```
ykman openpgp access change-admin-pin --admin-pin 12345678
```
Change the PIN:
```
ykman openpgp access change-pin --pin 123456
```
## Generate key
Let's generate the key pair locally, then move the private key to the Yubikey
afterwards.
```
gpg --expert --full-gen-key
Please select what kind of key you want:
(9) ECC (sign and encrypt) *default*
Your selection? 9
Please select which elliptic curve you want:
(1) Curve 25519 *default*
Your selection? 1
Please specify how long the key should be valid.
Key is valid for? (0) 1y
Is this correct? (y/N) y
```
Enter your real name, e-mail address and, optionally, a comment.
Validate:
```
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
```
Then move key to the Yubikey:
```
gpg --edit-key 0x1234
gpg> keytocard
```
Enter the **PGP password**, the one to protect the private key itself, then the
**Admin PIN** to store it on the Yubikey.
## Publish to keyserver
How do you discover public keys from other people?
1. Participate in a [key signing
party](https://en.wikipedia.org/wiki/Key_signing_party) (at
[FOSDEM](https://fosdem.org) for example)
1. Use a **keyserver**, because we cannot meet everybody in person
The [keys.openpgp.org](https://keys.openpgp.org/) server seems to be a popular
option to publish your public key. It verifies your identity by sending an
e-mail to every PGP identity (= e-mail address) included in the published key.
```
gpg --export 0x1234 | curl -T - https://keys.openpgp.org
```
## Info on the card
The gpg binary enables you to store more information, like your full name and
the URL of your public key, directly on the Yubikey. Feel free to set them up
or not: anyone grabbing your Yubikey will be able to retrieve this
information.
```
gpg --card-edit
gpg/card> admin
gpg/card> name
```
Enter your **last** name, **first** name, then the Admin PIN to save the
modification.
For the URL:
```
gpg/card> url
```
Enter the public key URL (ex:
`https://keys.openpgp.org/vks/v1/by-fingerprint/1234`), then the Admin PIN to
save the modification.
Verify:
```
gpg --card-status
```
This command tells you whether your Yubikey has been detected by gpg by
showing the card details. Sometimes, you may get the following error:
```
gpg: OpenPGP card not available: General error
```
In that case, you should unplug your Yubikey and plug it back in.
<p style="text-align: center">
<img
src="/yubikey/the-it-crowd-meme.jpg"
    alt="The IT Crowd (TV show) meme saying 'Have you tried turning it off and on again?'"
/>
</p>
## Renew an expired key
It's important to set an expiration date: if you lose control over your key,
it will expire automatically. I've performed this
[operation](https://superuser.com/questions/813421/can-you-extend-the-expiration-date-of-an-already-expired-gpg-key/1141251#1141251)
a couple of times over the years.
Edit the main key:
```
gpg --edit-key 0x1234
gpg> expire
Key is valid for? (0) 1y
Is this correct? (y/N) y
```
Enter the PIN to unlock the Yubikey.
Then edit the sub key:
```
gpg> key 1
gpg> expire
Key is valid for? (0) 1y
Is this correct? (y/N) y
```
Enter the PIN to unlock the Yubikey.
Save and publish:
```
gpg> save
gpg --keyserver keys.openpgp.org --send-keys 0x1234
```
This doesn't protect against compromise. In that case, you should [revoke the
key](https://superuser.com/questions/1526283/how-to-revoke-a-gpg-key-and-upload-in-gpg-server).
## Sign git commits
[Git](https://git-scm.com/) is a very popular [version
control](https://en.wikipedia.org/wiki/Version_control) software used by
open-source projects to distribute code and accept contributions. Show the
world that you own your git commits! With PGP, you can sign your commits so
everybody can verify that you are effectively the one and true author of the
commit (or that your private key has been compromised, but that's another
topic).
File `~/.gitconfig`:
```
[user]
    email = first@name.email
    name = First Name
    signingkey = 0x1234
[commit]
    # ask git to sign every commit (otherwise, use git commit -S)
    gpgsign = true
```
Every time you commit a change with git, you will have to unlock the PGP
private key on the Yubikey by entering the PIN.
Finally, add your public key to your [GitHub
account](https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account)
so everyone will see the check mark on your commits.
<p style="text-align: center">
<img
src="/yubikey/github-signed-commit.png"
alt="Screenshot from Github with 'jouir committed 2 weeks ago' + 'Verified'"
/>
</p>
# Sudo
The Yubikey can be used to secure [sudo](https://en.wikipedia.org/wiki/Sudo),
the software used to execute commands with root privileges. You can decide to
**replace** your password with a touch on the Yubikey ("passwordless"), or to
**add** a touch on the Yubikey after the password ("2FA"). [This blog
post](https://dev.to/bashbunni/set-up-yubikey-for-passwordless-sudo-authentication-4h5o)
describes both procedures.
I've tried to set up **passwordless** authentication for sudo but I'm still
asked for my password from time to time. And when the Yubikey needs to be
touched, sudo doesn't print any instruction on the terminal; the Yubikey just
starts to blink. The sudo command is not stuck, you simply have to touch the
Yubikey.
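For reference, the passwordless variant typically boils down to a single pam_u2f rule. A minimal sketch, assuming the pam-u2f package is installed and `~/.config/Yubico/u2f_keys` has been generated with `pamu2fcfg`; the `cue` option at least makes pam_u2f print a touch prompt, which mitigates the silent blinking described above:

```
# /etc/pam.d/sudo (excerpt), before the common-auth include
# "sufficient": a successful touch is enough, no password asked
# "cue": print "Please touch the device." on the terminal
auth sufficient pam_u2f.so cue
```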
# Conclusion
Is my digital life more secure now? Yes, probably. But security comes at a
cost. The cost of entering two passwords and touching the Yubikey for **every
single SSH connection** (maybe SSH with PIV is easier after all). The cost of
not being able to send PGP-encrypted messages to non tech-savvy people. The
cost of monitoring the expiration date of your PGP keys. The cost of the
Yubikey itself. On the bright side, replacing vicious OTPs that regenerate too
quickly with a simple touch is very nice! Same for sudo. In the end, it was a
fun project and that's what matters.

content/imgs/profile.jpg (binary image, 36 KiB, not shown)

content/talks.md
---
title: Talks
---
# 2024
* 2024-09-13: [Efficient Time Series Management with TimescaleDB at OVHcloud](/pgdaynl2024.html) ([transcript](/pgdaynl2024.handout.html), [video](https://www.youtube.com/watch?v=l2CiGydVXgI)) @ [PGDay Lowlands 2024](https://www.postgresql.eu/events/pgdaynl2024/schedule/session/5614-efficient-time-series-management-with-timescaledb-at-ovhcloud/)
* 2024-06-12: [PostGIS et la collection Hard Rock Cafe](/pgdayfr2024-hardrockcafe.html) (FR) @ [PG Day France 2024](https://pgday.fr) (lightning talk)
* 2024-06-12: [Trucs et astuces pour TimescaleDB](/pgdayfr2024-timescaledb.html) (FR) @ [PG Day France 2024](https://pgday.fr) (lightning talk)
* 2024-03-16: [Journey of a Home-based Personal Cloud Storage Project](/socallinuxexpo2024.html) ([transcript](/socallinuxexpo2024.handout.html), [video](https://www.youtube.com/watch?v=VYAdTSkDNjk)) @ [SCaLE 21x](https://www.socallinuxexpo.org/scale/21x/presentations/journey-home-based-personal-cloud-storage-project)
* 2024-03-15: [Database schema management for lazybones: from chaos to heaven](/postgresatscale2024.html) ([transcript](/postgresatscale2024.handout.html), [video](https://www.youtube.com/watch?v=cD79bTGh6Zc)) @ [PostgreSQL@SCaLE 21x](https://www.socallinuxexpo.org/scale/21x/presentations/database-schema-management-lazybones-chaos-heaven)
* 2024-02-13: [Real-Time Feeding of a Data Lake with PostgreSQL and Debezium](/meetupbe2024.html) ([transcript](/meetupbe2024.handout.html)) @ [PostgreSQL Users Group Belgium](https://www.meetup.com/postgresbe/events/298633135/)
* 2024-02-06: [Automating Internal Databases Operations at OVHcloud with Ansible](/cfgmgmtcamp2024.html) ([transcript](/cfgmgmtcamp2024.handout.html), [video](https://www.youtube.com/live/I2FaIdC5Rus?si=tdmqQMuItsigF3sl&t=3587)) @ [Config Management Camp 2024](https://cfp.cfgmgmtcamp.org/2024/talk/P3NKVG/)
* 2024-01-25: [Alimentation d'un Data Lake en temps réel grâce à PostgreSQL et Debezium](/meetuplille2024.html) (FR) ([transcript](/meetuplille2024.handout.html)) @ [Meetup PostgreSQL Lille](https://www.meetup.com/meetup-postgresql-lille/events/298087677/)
# 2022
* 2022-09-23: [The Elephantine Upgrade](/pgconfnyc2022.pdf) ([video](https://www.youtube.com/watch?v=TvjaD1GjQvA)) @ [PGConf NYC 2022](https://postgresql.us/events/pgconfnyc2022/sessions/session/998-the-elephantine-upgrade/)
* 2022-06-22: [Automatisation d'une mise à jour éléphantesque](/pgdayfrance2022.pdf) (FR) ([video](https://www.youtube.com/watch?v=Rlrt4LpieW0)) @ [PG Day France 2022](https://2022.pgday.fr/programme)
# 2021
* 2021-02-06: [Databases schema management for lazybones: from chaos to heaven](/fosdem21.pdf) ([video](https://video.fosdem.org/2021/D.postgresql/postgresql_database_schema_management_for_lazybones_from_chaos_to_heaven.webm)) @ [FOSDEM 21](https://fosdem.org/2021/schedule/event/postgresql_database_schema_management_for_lazybones_from_chaos_to_heaven/)
# 2020
* 2020-10-21: [Security and Traceability on Distributed Database Systems](/perconaliveonline2020.pdf) ([video](https://www.youtube.com/watch?v=3uoA4FlGwr4)) @ [Percona Live Online 2020](https://perconaliveonline2020.sched.com/event/ePpB)
* 2020-03-05: [Databases schema management for lazybones: from chaos to heaven](/meetupbe2020.pdf) @ [PostgreSQL Users Group Belgium](https://www.meetup.com/postgresbe/events/268772486/)
# 2019
* 2019-09-12: [Upgrade all the things!](/postgresopen2019.pdf) ([video](https://www.youtube.com/watch?v=53eiI_31ZVY)) @ [PostgresOpen 2019](https://postgresql.us/events/pgopen2019/schedule/session/613-upgrade-all-the-things/)
* 2019-06-19: [Dans les coulisses d'une infrastructure hautement disponible](/pgdayfrance2019.pdf) (FR) ([video](https://www.youtube.com/watch?v=DOU5xgrP3PI)) @ [PG Day France 2019](https://2019.pgday.fr/programme)