linotp.model.migrate module¶
database schema migration hook
- class linotp.model.migrate.MYSQL_Migration(engine: Engine)¶
Bases:
object
MYSQL schema and data migration - converting from latin1 to utf8.
- migrate_data(tables: list) None ¶
Worker for the data migration.
- Parameters
tables – list of tables where the data should be converted to utf8
- migrate_schema() list ¶
Migration worker, to update the schema definition.
MySQL ‘show create table’ returns a string which also contains the table’s charset. In case of a latin1 charset, the table definition is converted to utf8.
- Returns
list of migrated tables
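A minimal usage sketch of the two-step conversion, assuming a bound MySQL engine (the connection URL and credentials are illustrative):

    from sqlalchemy import create_engine

    from linotp.model.migrate import MYSQL_Migration

    # illustrative connection URL - adjust to the actual deployment
    engine = create_engine("mysql+pymysql://linotp:secret@localhost/LinOTP2")

    migration = MYSQL_Migration(engine)

    # convert the latin1 table definitions to utf8 first ...
    latin1_tables = migration.migrate_schema()

    # ... then re-encode the data of the affected tables
    migration.migrate_data(latin1_tables)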
- class linotp.model.migrate.Migration(engine: Engine)¶
Bases:
object
Migration class.
supports the db migration with a chain of db migration steps, where each step is defined as a class method according to the requested target version
- db_model_key = 'linotp.sql_data_model_version'¶
- get_current_version() Optional[str] ¶
Get the db model version number.
- Returns
current db version or None
- static is_db_model_current() bool ¶
Check if the db model is up to date by comparing the stored db model version entry with the current version.
- static is_db_untouched() bool ¶
Check if the db was just created or has been used already.
Once LinOTP has been run, the database contains the ‘linotp.Config’ entry, which is a timestamp of the last config entry change. If the entry does not exist, we can be sure that the db has not been touched.
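The check could be sketched roughly as follows, assuming the Config table and its Key column as defined in the LinOTP model (a sketch, not the exact implementation):

    from sqlalchemy import MetaData, Table, select

    def db_is_untouched(engine) -> bool:
        # reflect the Config table; table and column names are assumptions
        config = Table("Config", MetaData(), autoload_with=engine)

        with engine.connect() as connection:
            stmt = select(config.c.Key).where(config.c.Key == "linotp.Config")
            # no 'linotp.Config' timestamp entry means the db was never used
            return connection.execute(stmt).first() is None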
- iso8859_to_utf8_conversion() Tuple[bool, str] ¶
Migrate all User (only Username, Surname, Givenname and Email), Config and Token entries from latin1 to utf-8, but only if the label ‘utf8_conversion’: ‘suggested’ is set.
The conditions for this label are (see above): the database was not created with this version, and it is a mysql database.
- Returns
a tuple of bool and detail message
- migrate(from_version: Optional[str] = None, to_version: Optional[str] = None) Optional[str] ¶
Run all migration steps between the versions.
run all steps contained in the ordered list migration_steps
- Parameters
from_version – the version to start in the migration chain
to_version – the target version in the migration chain
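The step selection can be pictured with the following simplified sketch; the per-version method lookup shown here is an assumption about how the chain is dispatched:

    def run_migration_chain(migration, from_version, to_version):
        steps = migration.migration_steps

        # take only the steps after 'from_version' up to and including
        # 'to_version', e.g. '2.12.0.0' -> '3.1.0.0' runs migrate_3_0_0_0()
        # and migrate_3_1_0_0()
        pending = steps[steps.index(from_version) + 1 : steps.index(to_version) + 1]

        for version in pending:
            step = getattr(migration, "migrate_" + version.replace(".", "_"))
            step()
            migration.set_version(version)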
- migrate_2_10_1_0() None ¶
Run the migration to blob challenge and data column.
- migrate_2_12_0_0() None ¶
Run the migration for token to add the time stamps.
time stamps are: created, accessed and verified
- migrate_2_9_1_0() None ¶
Run the migration for bigger sized challenge column.
- migrate_3_0_0_0() None ¶
Migrate to linotp3 - to python3+mysql.
The major challenge of the linotp3 migration is the migration from python2+mysql, where the mysql driver used latin1 encoded data even though the database might already support utf8.
Thus we can exclude all freshly created and all non-mysql databases.
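The exclusion could look roughly like this (a sketch; deriving the dialect from the engine URL is an assumption of this example):

    from linotp.model.migrate import Migration

    def needs_latin1_conversion(engine) -> bool:
        # a freshly created database cannot contain latin1 encoded LinOTP2 data
        if Migration.is_db_untouched():
            return False

        # only mysql / mariadb databases are affected by the latin1 driver issue
        return engine.url.drivername.startswith("mysql")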
- migrate_3_1_0_0() Tuple[bool, str] ¶
Migrate the encrypted data to pkcs7 padding.
This requires to 1. decrypt the data and unpad it with the old format, 2. pad and encrypt the data with the new padding format.
This has to be done for the tokens and for the encrypted config values.
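A sketch of the re-padding of a single decrypted value; the PKCS#7 helper is written out for illustration, and the legacy unpadding is left as a placeholder since the old format is not described here:

    BLOCK_SIZE = 16  # AES block size in bytes

    def pkcs7_pad(data: bytes, block_size: int = BLOCK_SIZE) -> bytes:
        # each padding byte carries the number of padding bytes added
        pad_len = block_size - (len(data) % block_size)
        return data + bytes([pad_len]) * pad_len

    def re_pad(old_encrypted: bytes, decrypt, encrypt, unpad_old) -> bytes:
        # 1. decrypt and strip the old, pre-3.1 padding (unpad_old is a
        #    placeholder for the legacy unpadding, not defined here)
        plain = unpad_old(decrypt(old_encrypted))
        # 2. pad with pkcs7 and encrypt again
        return encrypt(pkcs7_pad(plain))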
- migrate_3_2_0_0()¶
Migration to 3.2 creates the internal (managed) admin resolver.
- migrate_3_2_2_0()¶
Migration to 3.2.2 drops all challenges
- migrate_3_2_3_0()¶
Migration to 3.2.3 - dummy migration step
which is required to trigger the Debian dbconfig upgrade
- migration_steps = [None, '2.9.1.0', '2.10.1.0', '2.12.0.0', '3.0.0.0', '3.1.0.0', '3.2.0.0', '3.2.2.0', '3.2.3.0']¶
- set_version(version: str) None ¶
Set the new db model version number.
on update: update the entry
on new: create new db entry
- Parameters
version – set the new db model version
- linotp.model.migrate.add_column(engine: Engine, table_name: str, column: Column)¶
Add a column to the table based on the column definition.
calling the compiled SQL statement:
ALTER TABLE table_name ADD COLUMN column_name column_type
- Parameters
engine – the bound sql database engine
table_name – the name of the table with the column
column – the instantiated column definition
- Returns
nothing
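For example, adding a datetime column (engine URL, table and column names are illustrative):

    from sqlalchemy import Column, DateTime, create_engine

    from linotp.model.migrate import add_column

    engine = create_engine("sqlite:///linotp.db")  # illustrative engine

    # emits roughly: ALTER TABLE Token ADD COLUMN LinOtpCreationDate DATETIME
    add_column(engine, "Token", Column("LinOtpCreationDate", DateTime()))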
- linotp.model.migrate.add_index(engine: Engine, table_name: str, column: Column) None ¶
Create an index based on the column index definition
calling the compiled SQL statement:
CREATE INDEX index_name ON table_name (column_name)
- Parameters
engine – the bound sql database engine
table_name – the name of the table with the column
column – the instantiated column definition
- Returns
nothing
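For example, indexing the column added above (same illustrative engine and names):

    from sqlalchemy import Column, DateTime

    from linotp.model.migrate import add_index

    # emits roughly: CREATE INDEX ix_... ON Token (LinOtpCreationDate)
    column = Column("LinOtpCreationDate", DateTime(), index=True)
    add_index(engine, "Token", column)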
- linotp.model.migrate.drop_column(engine: Engine, table_name: str, column: Column) None ¶
Drop a column from the table, calling the compiled SQL statement:
ALTER TABLE table_name DROP COLUMN column_name
- Parameters
engine – the bound sql database engine
table_name – the name of the table with the column
column – the instantiated column definition
- Returns
nothing
- linotp.model.migrate.has_column(engine: Engine, table_name: str, column: Column) bool ¶
Check if the column already exists in the table.
- Parameters
engine – database engine
table_name – the name of the table with the column
column – the instantiated column definition
- Returns
True if the column already exists in the table, False otherwise
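Migration steps need to stay safe to re-run, so a schema change can be guarded with this check (a sketch with the illustrative names from above):

    from sqlalchemy import Column, DateTime

    from linotp.model.migrate import add_column, has_column

    creation_date = Column("LinOtpCreationDate", DateTime())

    # only add the column if an earlier migration run has not created it yet
    if not has_column(engine, "Token", creation_date):
        add_column(engine, "Token", creation_date)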
- linotp.model.migrate.re_encode(value: str, from_encoding: str = 'iso-8859-15', to_encoding: str = 'utf-8') str ¶
Re-encode a value, by default from iso-8859 to utf-8.
Remark: we only have bytes here, coming from LinOTP2 and stored by python2 and sqlalchemy. The observation is that under certain circumstances the stored data is iso-8859 encoded, but sometimes it could as well be utf-8.
In python3 this data is now loaded into a str object, which is a utf-8 encoded string. A conversion from iso-8859 to utf-8 does not fail, as all codepoints of iso-8859 are within the utf-8 range, but the result is a bad representation because the codepoints of iso-8859 and utf-8 do not match.
We use iso-8859-15 here, which is a superset of iso-8859-1, which in turn is a superset of ascii.
- Parameters
value – str data, might contain iso-8859 data
from_encoding – str data encoding, default ‘iso-8859-15’
to_encoding – str data output encoding, default ‘utf-8’
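For illustration, the mojibake case this addresses and the underlying repair pattern (the exact handling of values that are already correct utf-8 is not asserted here):

    original = "Müller"

    # utf-8 bytes read back through an iso-8859 connection decode without
    # error, but into a garbled two-characters-per-umlaut string
    mojibake = original.encode("utf-8").decode("iso-8859-15")

    # repair pattern: encode with the wrong codec again, then decode as utf-8
    repaired = mojibake.encode("iso-8859-15").decode("utf-8")
    assert repaired == original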
- linotp.model.migrate.run_data_model_migration(engine: Engine) str ¶
Hook for the database schema upgrade.
Called during database initialisation.
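A minimal invocation sketch, assuming a bound engine (the connection URL is illustrative, and the meaning of the returned string as the resulting model version is an assumption):

    from sqlalchemy import create_engine

    from linotp.model.migrate import run_data_model_migration

    engine = create_engine("mysql+pymysql://linotp:secret@localhost/LinOTP2")

    # run all pending migration steps during database initialisation
    version = run_data_model_migration(engine)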