Modifying the database schema requires a database migration (even when adding or removing tables). To autogenerate a migration:
$ docker-compose run web python -m warehouse db revision --autogenerate --message "text"
Verify your migration was generated by looking at the output from the command above:
Generating /opt/warehouse/src/warehouse/migrations/versions/390811c1cdbe_.py ... done
Then migrate and test your migration:
$ docker-compose run web python -m warehouse db upgrade head
Migrations are automatically run as part of the deployment process, but prior
to the old version of Warehouse being shut down. This means that each
migration must be compatible with the current main branch of Warehouse.
This makes breaking changes more difficult, since they must be phased in over time. For example, to rename a column you must first add the new column in one migration and start writing to it while reading from both columns; then run a migration that backfills the existing data; then switch the code to use only the new column; and only then can you remove the old column in a final migration.
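To make the phased approach concrete, the SQL each step's migration might run can be sketched as plain strings (the `users` table and the `username`/`login` column names here are hypothetical, chosen only to illustrate a rename):

```python
# Hypothetical phased rename of users.username to users.login,
# split across several deployments so each migration stays
# compatible with the application code still running.

# Deploy 1: add the new column. The app starts writing to both
# columns while still reading from the old one.
step_1 = "ALTER TABLE users ADD COLUMN login TEXT"

# Deploy 2: backfill rows that predate the dual-write code.
step_2 = "UPDATE users SET login = username WHERE login IS NULL"

# Deploy 3: the app switches to reading and writing only the new
# column; no schema change is needed for this step.

# Deploy 4: once nothing references the old column, drop it.
step_3 = "ALTER TABLE users DROP COLUMN username"
```

Each statement lives in its own migration, so at every point the running code and the schema agree.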
To help prevent an accidentally long-running migration from taking down PyPI, by default a migration will time out if it waits more than 4 seconds to acquire a lock, or if any individual statement takes more than 5 seconds.
The lock timeout protects against the case where a long-running app query blocks the migration, and the waiting migration in turn blocks short app queries that could otherwise have run concurrently with the long-running query.
The statement timeout protects against locking the database for an extended period of time (a risk most often seen with data migrations).
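For data migrations specifically, one way to stay under the statement timeout is to backfill in small batches of rows rather than with one large UPDATE, so that each individual statement finishes quickly. A minimal sketch of the batching arithmetic (the batch size and id range are made-up examples):

```python
def batch_ranges(min_id, max_id, batch_size):
    """Yield inclusive (start, end) id ranges covering [min_id, max_id]."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield (start, end)
        start = end + 1

# Each range becomes its own short-running statement, e.g.:
#   UPDATE users SET login = username WHERE id BETWEEN %s AND %s
for start, end in batch_ranges(1, 2500, 1000):
    pass  # execute one bounded UPDATE per batch here
```

Because each UPDATE touches at most `batch_size` rows, no single statement holds its locks for long.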
It is possible to override these values inside of a migration. To do so, add the following to your migration:

op.execute("SET statement_timeout = 5000")
op.execute("SET lock_timeout = 4000")
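Where those overrides go can be sketched with a stand-in for Alembic's `op` object (the stub class below is illustrative only; a real migration gets `op` via `from alembic import op`, and the timeout values shown are arbitrary examples):

```python
class _FakeOp:
    """Minimal stand-in for Alembic's op that records executed SQL."""
    def __init__(self):
        self.statements = []

    def execute(self, sql):
        self.statements.append(sql)

op = _FakeOp()  # in a real migration: from alembic import op

def upgrade():
    # Raise the timeouts at the top of upgrade(), before any schema
    # operations, so they apply to everything that follows.
    op.execute("SET statement_timeout = 10000")
    op.execute("SET lock_timeout = 8000")
    # ... schema operations follow ...

upgrade()
```

The overrides only affect the migration's own session, so the defaults still protect everything else.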
For more information on what kinds of operations are safe in a high-availability environment like PyPI, there is related reading available at: