Staging OFN v2

Welcome to the Open Food Network version 2 :tada:

From now on I think we can forget about “spree upgrade” and “2-0-stable branch” and just call it “OFN v2” or simply v2.

We need to define a strategy for staging v2. The main reason: the staging process does not rebuild the database on the server from scratch, so the data on the staging server is preserved across deploys and can be used for advanced testing.
Because v2 has a different DB schema from v1, deploying v1 and v2 on the same staging server will cause problems, specifically when deploying a v1 PR on a server that has a v2 DB. Are these statements correct?
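For illustration, a pre-deploy guard could refuse to put a v1 branch on a server holding a v2 database. This is a hypothetical sketch, not part of the OFN playbooks: the cutoff timestamp and function names are invented, and the real check would read the newest `schema_migrations` entry from the server's DB.

```shell
# Hypothetical guard -- NOT part of the OFN playbooks. The cutoff timestamp
# below is invented; the actual first v2 migration timestamp would replace it.
V2_CUTOFF=20190101000000

schema_is_v2() {
  # $1: newest schema_migrations version on the server, e.g. obtained via
  #   psql -tAc 'SELECT max(version) FROM schema_migrations' "$DB"
  [ "$1" -ge "$V2_CUTOFF" ]
}

guard_deploy() {
  # $1: branch generation ("v1" or "v2"); $2: newest schema version on the server
  if [ "$1" = "v1" ] && schema_is_v2 "$2"; then
    echo refuse
  else
    echo deploy
  fi
}

guard_deploy v1 20190601000000   # prints "refuse": v1 PR onto a v2 database
guard_deploy v2 20190601000000   # prints "deploy"
```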

If I am not mistaken, we currently have 4 staging servers: uk, aus, es, fr.
I believe the best solution is to assign two staging servers to v2 and keep the other two for v1 PRs.
If we make this decision, staging v2 should be straightforward. Correct?

Thoughts, concerns?


Sounds like a super solid plan @luisramos0 :ok_hand:

That statement is not correct for the Australian staging server. There, the database is always reset to a baseline database dump unless we deployed master on the staging server. This setup is really good for testing v2, because it runs all the v2 migrations on each deploy, ensuring they are not broken. Our staging server runs the following script on each deploy:


```sh
set -e

master_merged() {
  git fetch -p
  git merge-base --is-ancestor HEAD FETCH_HEAD
}

save_staging_baseline() {
  echo "Saving baseline data"
  pg_dump -h "$DB_HOST" -U "$DB_USER" "$DB" | gzip > "$BACKUP_PATH/staging-baseline.sql.gz"
}

backup_and_restore_baseline() {
  echo "Backup database"
  pg_dump -h localhost -U ofn_user openfoodnetwork | gzip > "$BACKUP_PATH/staging-`date +%Y%m%d%H%M%S`.sql.gz"
  echo "Load baseline data"
  dropdb -h localhost -U ofn_user "$DB"
  createdb -h localhost -U ofn_user "$DB"
  zcat "$BACKUP_PATH/staging-baseline.sql.gz" | psql -h localhost -U ofn_user openfoodnetwork
}

mkdir -p "$BACKUP_PATH"

# Master becomes the new baseline; any other branch gets the baseline restored.
if master_merged; then
  save_staging_baseline
else
  backup_and_restore_baseline
fi
```
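For reference, the branch test in `master_merged` relies on `git merge-base --is-ancestor`, which exits 0 iff the first commit is an ancestor of the second. A throwaway-repo demo (repo path and identities are invented for the sketch):

```shell
# Throwaway demo of the ancestry check used by master_merged above.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "commit on master"
master_tip=$(git rev-parse HEAD)

git checkout -qb feature
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "feature work"

# master's tip is contained in the feature branch, so this exits 0:
git merge-base --is-ancestor "$master_tip" HEAD && echo "ancestor"
# the feature commit is not contained in master, so this exits non-zero:
git merge-base --is-ancestor HEAD "$master_tip" || echo "not an ancestor"
```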

great @maikel
In the catch-up meeting today, we agreed to use the AUS staging server to stage v2 PRs as needed, and to make staging FR the v2 reference server where only 2-0-stable or v2 PRs will be deployed.

Next step: give it a try.

Done. v2 is running on staging FR.

I ran db:reset manually before the deployment and it just worked. It needs to be verified carefully now.

ok, looks like v2 is working fine on staging FR.

I think we need to create issues to:
1 - make the DB migration work on top of a v1 installation: installing v1 and then running db:migrate must work, right? It didn't work this time on staging FR.
2 - make the data migration work: starting from a v1 installation with a full set of data, running db:migrate should produce a working v2 database with all the existing data correctly migrated.
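To make issue 1 concrete, the verification could follow roughly these steps. The DB name, dump filename, and environment variable usage are illustrative assumptions; the commands are printed as a dry run rather than executed:

```shell
# Dry-run listing of a hypothetical v1 -> v2 migration check; names are invented.
migration_check_steps() {
  cat <<'STEPS'
createdb ofn_v1_check
zcat v1-baseline.sql.gz | psql ofn_v1_check
git checkout 2-0-stable
DATABASE_URL=postgres:///ofn_v1_check bundle exec rake db:migrate
DATABASE_URL=postgres:///ofn_v1_check bundle exec rake db:seed
STEPS
}

migration_check_steps
```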

Any comments/thoughts on this? Shall I create the issues?

We already migrate the database on every deploy, without any distinction between data and DDL migrations. However, the seed database task doesn't seem to work.

So I see no reason why deploying a v2 branch would not work on a staging server previously running v1 (I just did it in my dev env). If we also want to wipe out all previous data and add sample data, that is something we need to fix.

data migration epic:

I think there was some problem with the FR staging DB. We need to double-check on another staging server to be sure, but after I dropped the DB and recreated it, the deploy playbook (which includes db:migrate) worked just fine on FR staging when deploying 2-0-stable on top of master.

So, my tests with db:migrate from v1 to v2 are successful, and that includes db:seed. @sauloperez can you please add details about the problem you saw in seeding the DB?

I meant the seed database Ansible task, sorry. I did a brain dump there :smile:

v2 complete. archiving.