I recently took the plunge and started migrating over to Mastodon. In previous articles, I have discussed taking more ownership of my digital ecosystem, starting with email. Mastodon gives you the option to host your own server, so I did just that.
You can follow me on Mastodon.
I opted to use a bare-metal server I had lying around, but this guide will apply to any cloud provider such as AWS, GCP, Azure, DigitalOcean (referral link), or Vultr. If you opt for local file storage as I did, just make sure to use a block storage volume that can survive if your VM has any issues. You can also take advantage of hosted database instances if you would like.
Mastodon and Docker Compose
I usually opt for Docker and Docker Compose when running workloads on my servers, and I have posted about this previously. Docker Compose is a great method for declaratively defining your applications and infrastructure. I started by looking at the source code of Mastodon, where they provide a docker-compose template to start from here.
This is a good starting point, but I made a few tweaks for clarity and brevity. Below is the final configuration I ended up with.
```yaml
version: "3"

services:
  # Cloudflare controls the main ingress to the mastodon container
  # and the streaming container below
  cloudflare:
    container_name: cloudflare
    restart: always
    image: cloudflare/cloudflared:latest
    networks:
      - mastodon
    command: "tunnel --config /etc/cloudflared/config.yaml run"
    volumes:
      - /tank/docker/mastodon/cloudflare:/etc/cloudflared:ro
    depends_on:
      - mastodon

  postgres:
    container_name: postgres
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    env_file: .env.postgres
    networks:
      - mastodon
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
    volumes:
      - /tank/docker/mastodon/postgres:/var/lib/postgresql/data

  redis:
    container_name: redis
    restart: always
    image: redis:7-alpine
    networks:
      - mastodon
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    volumes:
      - /tank/docker/mastodon/redis:/data

  elasticsearch:
    container_name: elasticsearch
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
      - "xpack.license.self_generated.type=basic"
      - "xpack.security.enabled=false"
      - "xpack.watcher.enabled=false"
      - "xpack.graph.enabled=false"
      - "xpack.ml.enabled=false"
      - "bootstrap.memory_lock=true"
      - "cluster.name=es-mastodon"
      - "discovery.type=single-node"
      - "thread_pool.write.queue_size=1000"
    networks:
      - mastodon
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl --silent --fail localhost:9200/_cluster/health || exit 1",
        ]
    volumes:
      - /tank/docker/mastodon/elasticsearch:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536

  mastodon:
    container_name: mastodon
    image: tootsuite/mastodon:v3.5.3
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - mastodon
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "wget -q --spider --proxy=off localhost:3000/health || exit 1",
        ]
    depends_on:
      - postgres
      - redis
      - elasticsearch
    volumes:
      - /tank/docker/mastodon/mastodon:/mastodon/public/system

  streaming:
    container_name: mastodon-streaming
    image: tootsuite/mastodon:v3.5.3
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - mastodon
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1",
        ]
    depends_on:
      - postgres
      - redis

  sidekiq:
    container_name: sidekiq
    image: tootsuite/mastodon:v3.5.3
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - postgres
      - redis
    networks:
      - mastodon
    volumes:
      - /tank/docker/mastodon/mastodon:/mastodon/public/system
    healthcheck:
      test: ["CMD-SHELL", "ps aux | grep '[s]idekiq\ 6' || false"]

networks:
  mastodon:
```
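With the configuration saved as `docker-compose.yml`, bringing the stack up is a single command. As a sketch (the health checks defined above take a minute or two to settle, and the exact `docker compose` vs. `docker-compose` invocation depends on your Docker version):

```shell
# Pull the pinned images and start every service in the background
docker compose up -d

# Check that the health checks defined in the compose file are passing
docker compose ps

# Tail the web container's logs while it boots
docker compose logs -f mastodon
```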
I am a huge fan of Cloudflare Tunnels. In addition to the basic hardening of my servers, I always opt to use a tunnel for exposing my infrastructure to the world. Some minor configuration was required here because Mastodon exposes a WebSocket interface; I will share the configuration below.
The first item in the ingress list matches your hostname and path and sends any traffic destined for the WebSocket to the streaming container running above. The second ingress sends the remainder of the traffic to our mastodon container. Finally, a 404 is returned if nothing matches. Note that all of this runs within the mastodon network inside Docker and nothing is exposed publicly. I have no DNS configuration to manage and no firewall to worry about; Cloudflare handles all of this.
```yaml
tunnel: <TUNNEL ID GOES HERE>
credentials-file: /etc/cloudflared/credentials.json

ingress:
  - hostname: <YOUR_HOSTNAME_GOES_HERE>
    path: \/api\/v1\/streaming\/
    service: http://streaming:4000
  - hostname: <YOUR_HOSTNAME_GOES_HERE>
    service: http://mastodon:3000
  - service: http_status:404
```
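If you have not created the tunnel itself yet, the steps look roughly like this (a sketch; the tunnel name `mastodon` is my choice, substitute your own):

```shell
# Authenticate cloudflared against your Cloudflare account
cloudflared tunnel login

# Create the tunnel; this prints the tunnel ID and writes a credentials JSON file
cloudflared tunnel create mastodon

# Point your hostname's DNS record at the tunnel
cloudflared tunnel route dns mastodon <YOUR_HOSTNAME_GOES_HERE>
```

The generated credentials JSON then needs to be copied to `/tank/docker/mastodon/cloudflare/credentials.json` so the cloudflare container can read it at the path referenced in the config.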
I opted for local filesystem storage on my instance for all containers. All of the data lives under /tank/docker/mastodon/<CONTAINER_NAME>, with each container getting its own folder. You will need to create a mastodon user and group on your machine so that Docker permissions work nicely with the mastodon volume.
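A minimal sketch of that setup, assuming the tootsuite image's default UID/GID of 991 (worth verifying against your image before running):

```shell
# Create a data directory for each container in the compose file
sudo mkdir -p /tank/docker/mastodon/{cloudflare,postgres,redis,elasticsearch,mastodon}

# The tootsuite/mastodon image runs as UID/GID 991 by default
# (assumption: confirm with `docker run --rm tootsuite/mastodon:v3.5.3 id`)
sudo groupadd --gid 991 mastodon
sudo useradd --uid 991 --gid 991 --no-create-home mastodon

# Give the mastodon container write access to its uploads volume
sudo chown -R mastodon:mastodon /tank/docker/mastodon/mastodon
```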
On my server, I am running ZFS in a mirrored setup so I can tolerate a drive failure (hopefully). I am also working on setting up regular backups, which will be a topic for another article.
I really have no use for the email notifications, but I went ahead and set them up anyway with SparkPost, as it appears they have a free forever tier for low volume. I quickly got my email domain greylisted when the first confirmation email went out, so I decided to move on: I went into the Docker container and used tootctl to confirm and approve my user.
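If you hit the same greylisting problem, something like the following should work, assuming the account name is `admin` (substitute your own):

```shell
# Run tootctl inside the running mastodon container to mark the account's
# email as confirmed and approve it, bypassing the confirmation email
docker compose exec mastodon tootctl accounts modify admin --confirm --approve
```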
I opted for a single-user instance, which means I am the only person on my server; it gives me complete control over my data. There is an interactive setup wizard that will help generate your config file if you would like: run RAILS_ENV=production bundle exec rake mastodon:setup and it will walk you through creating your config. You then need to store this file on your host machine in a file named .env.production. Note that I also have a separate config file for Postgres, named .env.postgres, that stores the database password. You can combine these if you prefer.
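As a sketch, the wizard can be run through the compose file itself (assuming the service names from the configuration above; the compose file expects both env files to exist, so create empty ones first):

```shell
# The compose file references these env files, so they must exist before any
# service starts, even empty
touch .env.production .env.postgres

# Start the databases so the wizard can reach them
docker compose up -d postgres redis

# Run the interactive wizard in a throwaway mastodon container, then copy
# the configuration it prints into .env.production on the host
docker compose run --rm mastodon bundle exec rake mastodon:setup RAILS_ENV=production
```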