
python - Alembic support for multiple Postgres schemas

How can I use Alembic's --autogenerate to migrate multiple Postgres schemas that are not hard-coded in the SQLAlchemy model? (A mirror of the question SQLAlchemy support of Postgres Schemas, but for Alembic.)

In particular, we use Postgres schemas to separate different clients that share the same set of tables, plus one schema for data shared among all clients. The SQLAlchemy model has no knowledge of schemas; the schema is set at run-time using session.execute("SET search_path TO client1,shared").
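For context, a minimal sketch of that run-time pattern (the connection URL is illustrative, and client1/shared are just the example schema names from above; SQLAlchemy 1.4+ requires wrapping raw SQL in text()):

from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("postgresql://user:pass@localhost/mydb")  # illustrative DSN

with Session(engine) as session:
    # Point this session at one client's schema plus the shared schema;
    # the model itself carries no schema information.
    session.execute(text("SET search_path TO client1,shared"))
    # ... normal ORM work against that client's tables ...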

The default --autogenerate does not help at all: it detects multiple schemas that do not exist in the model, and ends up dropping those schemas and re-creating every table in the default schema.

I would really like to use --autogenerate, though, with the proper plumbing to set the schemas correctly. Any suggestions on whether and how Alembic's API can do this?



1 Reply


From issue 409: applying upgrade/downgrade operations to tenant-specific schemas is most easily done with translated schema names, which is also how you would normally run the main application for multi-tenancy.

Go into env.py:

import logging

from sqlalchemy import engine_from_config, pool

from alembic import context

# config, target_metadata and all_my_tenant_names come from the rest of
# env.py / your project configuration, as usual
config = context.config
logger = logging.getLogger("alembic.env")


def run_migrations_online():

    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        for tenant_schema_name in all_my_tenant_names:
            # translate the default (None) schema name to this tenant's schema
            conn = connection.execution_options(
                schema_translate_map={None: tenant_schema_name})

            logger.info("Migrating tenant schema %s" % tenant_schema_name)
            context.configure(
                connection=conn,
                target_metadata=target_metadata
            )

            # to run each tenant in its own transaction;
            # move this up to do all tenants in one giant transaction
            with context.begin_transaction():
                context.run_migrations()

The above will translate the None schema name into the given tenant name. If the application shares tenant-based schemas with a default schema that holds global tables, then you'd use some token like "tenant_schema" as the symbol:

for tenant_schema_name in all_my_tenant_names:
    conn = connection.execution_options(
        schema_translate_map={"tenant_schema": tenant_schema_name})

and in migration files refer to "tenant_schema" where the actual tenant-specific schema name goes:

def upgrade():
    # nullable=False stands in for whatever alteration your migration needs
    op.alter_column("some_table", "some_column", nullable=False,
                    schema="tenant_schema")

For the "autogenerate" case, the solution @nick-retallack provides has some more of the pieces you would use on that end, namely the use of include_schemas so that autogenerate looks only at a "specimen" schema that represents the latest version of the tenant-specific schema.

In order to set up env.py to use the right system for the right command, the behaviors can be controlled using user-defined options via context.get_x_argument().
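For example, a small sketch (the x-argument key "tenants" is a made-up name for illustration; get_x_argument itself is Alembic's documented hook for -x command-line options):

# invoked as: alembic -x tenants=client1,client2 upgrade head
tenants_opt = context.get_x_argument(as_dictionary=True).get("tenants")
if tenants_opt:
    all_my_tenant_names = tenants_opt.split(",")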

