Compare commits

...

47 Commits
v0.2.0 ... main

Author SHA1 Message Date
Dmitriy Pleshevskiy 7b9f0b3060 chore: remove funding 2022-03-17 16:53:07 +03:00
Dmitriy Pleshevskiy 9d700a3e99 chore: fix clippy warnings 2021-10-22 00:29:09 +03:00
Dmitriy Pleshevskiy c8940bae1c fix: clippy warnings 2021-08-23 10:18:03 +03:00
Dmitriy Pleshevskiy b11c07163f
Single transaction (#14)
* style: use default instead equal with option value

* chore: add non_exhaustive for enums

* chore: fix clippy warnings

* refac!: single transaction by default

BREAKING CHANGES: the single-transaction flag was removed from the CLI
2021-07-30 23:25:43 +03:00
Dmitriy Pleshevskiy 97d4755b4d
Create FUNDING.yml 2021-07-06 23:52:14 +03:00
Dmitriy Pleshevskiy 9988943aae chore: bump migra cli version 2021-06-13 01:41:26 +03:00
Dmitriy Pleshevskiy ec02367680
Migra core (#11)
* feat(core): init migra lib
* refac(core): add utils for migration list
* feat(core): add managers
* refac(core): add batch exec trait
* refac(core): smarter managers
* refac(cli): removed adapter, builder
* refac(cli): use migra core for cli
* chore(cli): add dev deps for tests
* chore(cli): improve error handling
* refac(core): make migrations simpler
* refac(cli): change transaction utils
* chore(core): add documentation
2021-06-13 01:39:56 +03:00
Dmitriy Pleshevskiy c144086cb1
Merge pull request #10 from pleshevskiy/sqlite
feat: add sqlite client
2021-05-23 13:44:43 +03:00
Dmitriy Pleshevskiy 1d4f089e77 chore: don't support rusqlite feature name 2021-05-23 13:35:11 +03:00
Dmitriy Pleshevskiy 0633780b84 chore: add sqlite to readme 2021-05-23 13:33:12 +03:00
Dmitriy Pleshevskiy 128047723d chore: remove sqlite database before test 2021-05-23 12:33:14 +03:00
Dmitriy Pleshevskiy 1602069eb5 chore: add transactional ddl 2021-05-23 00:30:43 +03:00
Dmitriy Pleshevskiy c20f3c3411 fix: supports old sqlite version in downgrade 2021-05-17 10:51:53 +03:00
Dmitriy Pleshevskiy 97178fcb02 feat: add sqlite client 2021-05-17 10:06:33 +03:00
Dmitriy Pleshevskiy 3845cd09d6 🎉 release migra-cli 0.5 2021-05-16 17:06:34 +03:00
Dmitriy Pleshevskiy 885abd0871
Merge pull request #9 from pleshevskiy/extend-manifest
Extend migra manifest
2021-05-16 16:04:34 +02:00
Dmitriy Pleshevskiy 7ae88ce3d3 chore: return new fn for migration manager 2021-05-16 17:01:41 +03:00
Dmitriy Pleshevskiy 285d1778b4 style: move migra toml constant 2021-05-16 16:57:20 +03:00
Dmitriy Pleshevskiy def6534fd1 feat: add date format option 2021-05-16 16:55:59 +03:00
Dmitriy Pleshevskiy 11874bd8a4 feat: add migrations config to manifest
- added migrations directory path
- added migrations table name
2021-05-16 16:39:24 +03:00
Dmitriy Pleshevskiy 83a4155d76 refac: add migrations table name to manifest 2021-04-26 12:18:12 +03:00
Dmitriy Pleshevskiy eef7980222 chore: cosmetic changes 2021-04-24 23:16:30 +03:00
Dmitriy Pleshevskiy 25ea001ec4
Merge pull request #7 from pleshevskiy/task-2
feat: single transaction
2021-04-24 21:52:02 +02:00
Dmitriy Pleshevskiy 20e00c3579 chore: fix clippy warnings 2021-04-24 22:48:10 +03:00
Dmitriy Pleshevskiy 7ae5eec2c3 chore: add support transactional ddl for client
I didn't know that MySQL doesn't support transactional DDL. This means
that we cannot create tables, alter tables, etc. inside a transaction.
At the moment migra supports only the Postgres client, which can use
transactions for DDL.
2021-04-24 22:39:44 +03:00
Dmitriy Pleshevskiy f98dd4f0c8 feat: single transaction
I added a single transaction option for apply, upgrade, and
downgrade commands, which wraps all migrations into a single
transaction. This gives you the ability to safely roll up
migrations and, if some unforeseen situation occurs, roll them back.

Unfortunately, if there is a syntax error, MySQL will not roll back
the migration and commits automatically :( I will research this
issue.

Closes #2
2021-04-24 01:58:19 +03:00
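The all-or-nothing behavior this commit describes can be sketched in isolation. This is a hypothetical `apply_all` helper that simulates the single-transaction semantics, not migra's actual API:

```rust
// All-or-nothing application of a batch of migrations: if any step
// fails, the whole batch is treated as rolled back. The `bool` stands
// in for whether the migration's SQL would succeed.
fn apply_all(migrations: &[(&str, bool)]) -> Result<Vec<String>, String> {
    let mut applied = Vec::new();
    for (name, ok) in migrations {
        if *ok {
            applied.push(name.to_string());
        } else {
            // Simulated rollback: nothing from this batch is kept.
            return Err(format!("{} failed; rolled back {} step(s)", name, applied.len()));
        }
    }
    Ok(applied)
}

fn main() {
    // Both steps succeed: everything is applied.
    assert_eq!(apply_all(&[("a", true), ("b", true)]).unwrap(), vec!["a", "b"]);
    // Second step fails: the batch reports an error instead of a partial apply.
    assert!(apply_all(&[("a", true), ("b", false)]).is_err());
    println!("ok");
}
```

With a real database this corresponds to opening one transaction, running every migration inside it, and committing only at the end, which is exactly why MySQL's implicit commits around DDL break the guarantee.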
Dmitriy Pleshevskiy 56f4d190de
Merge pull request #6 from pleshevskiy/task-3
feat: apply multiple files
2021-04-09 22:30:23 +02:00
Dmitriy Pleshevskiy 244a758154 chore: remove dbg 2021-04-09 23:08:16 +03:00
Dmitriy Pleshevskiy 8f06b69f5d feat: apply multiple files
Closes #3
2021-04-09 01:09:51 +03:00
Dmitriy Pleshevskiy 155b2e6aa2
Merge pull request #5 from pleshevskiy/task-3
refac: preparatory work for subsequent changes
2021-04-08 19:51:48 +02:00
Dmitriy Pleshevskiy cb1ca43dcb
Merge pull request #4 from pleshevskiy/stmt
chore: move db statements to another trait
2021-04-08 18:04:55 +02:00
Dmitriy Pleshevskiy 07d17c9e93 refac: preparatory work for subsequent changes
Moved the main logic, such as running a command or getting a config,
into it.

Changed incoming parameters for commands.
2021-04-08 01:50:51 +03:00
Dmitriy Pleshevskiy e81298d1ba chore: move db statements to another trait 2021-04-08 00:49:38 +03:00
Dmitriy Pleshevskiy a3907c5784 Update issue templates 2021-03-26 11:47:52 +03:00
Dmitriy Pleshevskiy ffa64248c5 Update issue templates 2021-03-26 11:15:02 +03:00
Dmitriy Pleshevskiy a49b9b4ecb chore: bump version 2021-03-26 02:23:31 +03:00
Dmitriy Pleshevskiy 29ce430e00 doc: update installation guide 2021-03-26 02:22:29 +03:00
Dmitriy Pleshevskiy 42169f9f40
Merge pull request #1 from pleshevskiy/mysql
feat: add mysql database supporting
2021-03-26 01:15:29 +02:00
Dmitriy Pleshevskiy 18bf265510 feat: add mysql database supporting 2021-03-26 02:10:41 +03:00
Dmitriy Pleshevskiy c05bac36e7 chore: bump version 2021-03-25 00:42:15 +03:00
Dmitriy Pleshevskiy a29c65a9a7 fix: find exists manifest 2021-03-25 00:41:22 +03:00
Dmitriy Pleshevskiy c8c6765483 chore: fix badge 2021-03-02 00:57:17 +03:00
Dmitriy Pleshevskiy 0d9cd7af71 chore: bump version 2021-03-02 00:54:07 +03:00
Dmitriy Pleshevskiy 62283687a4 chore: update readme 2021-03-02 00:53:56 +03:00
Dmitriy Pleshevskiy 11c374e7b0 feat: add transaction manager
now we change the database only inside a transaction
2021-03-02 00:44:57 +03:00
Dmitriy Pleshevskiy 7c8ff199cc chore: add sample env 2021-03-02 00:44:35 +03:00
Dmitriy Pleshevskiy 9e5c2192d4 feat: add dotenv to load migra config 2021-02-26 01:21:29 +03:00
77 changed files with 2669 additions and 1103 deletions

.env.sample

@@ -0,0 +1 @@
DATABASE_URL=postgres://postgres:postgres@localhost:6000/migra_tests

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,34 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
<!-- Describe the bug -->
A clear and concise description of what the bug is.
### Steps to reproduce
1. ...
```rust
// Paste a minimal example that causes the problem.
```
### Expected Behavior
<!-- Tell us what should happen. -->
### Actual Behavior
<!-- Tell us what happens instead. -->
```
Paste the full traceback if there was an exception.
```
### Environment
* OS: Linux / Windows / MacOS
* rust version:


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
### Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
### Describe the solution you'd like
A clear and concise description of what you want to happen.
### Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -31,6 +31,7 @@ jobs:
with:
path: ~/.cargo/registry
key: ${{ runner.os }}-cargo-registry-${{ hashFiles('Cargo.lock') }}
- name: Cache cargo index
uses: actions/cache@v1
with:
@@ -48,7 +49,7 @@ jobs:
uses: actions-rs/cargo@v1
with:
command: test
args: -- --test-threads=1
args: --all-features -- --test-threads=1
clippy:
name: clippy (ubuntu-latest, stable)

.gitignore

@@ -1,2 +1,5 @@
target/
Cargo.lock
# sqlite databases
*.db


@@ -2,5 +2,6 @@
"cSpell.words": [
"migra"
],
"editor.formatOnSave": true
"editor.formatOnSave": true,
"rust.all_features": true
}


@@ -1,4 +1,5 @@
[workspace]
members = [
"migra-cli"
"migra",
"migra_cli",
]


@@ -13,6 +13,19 @@ Simple SQL migration manager for your project.
cargo install migra-cli
```
If you want to use dotenv to configure migra cli, just run the following in your terminal.
```bash
cargo install migra-cli --features dotenv
```
Each supported database client is gated behind a feature of the same name.
The default is `postgres`.
For example, if you only want to work with `mysql`, you need to disable `postgres` and enable `mysql`.
```bash
cargo install migra-cli --no-default-features --features mysql
```
### Usage
@@ -39,7 +52,11 @@ For more information about the commands, simply run `migra help`
### Supported databases
- [x] Postgres
| Database | Feature | Default |
|----------|--------------|:------------------:|
| Postgres | postgres | :heavy_check_mark: |
| MySQL | mysql | :x: |
| Sqlite | sqlite | :x: |
## License


@@ -1,4 +1,4 @@
version: '3'
version: "3"
services:
postgres:
@@ -13,6 +13,22 @@ services:
ports:
- 6000:5432
mysql:
image: mysql
container_name: migra.mysql
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_ROOT_PASSWORD: example
MYSQL_DATABASE: "migra_tests"
MYSQL_USER: "mysql"
MYSQL_PASSWORD: "mysql"
volumes:
- mysql_data:/var/lib/mysql
ports:
- 6001:3306
volumes:
postgres_data:
driver: local
mysql_data:
driver: local


@@ -1,31 +0,0 @@
use crate::config::Config;
use crate::database::prelude::*;
use crate::database::MigrationManager;
use crate::opts::ApplyCommandOpt;
use crate::StdResult;
use std::convert::TryFrom;
pub(crate) fn apply_sql(config: Config, opts: ApplyCommandOpt) -> StdResult<()> {
let mut manager = MigrationManager::try_from(&config)?;
let file_path = {
let mut file_path = config.directory_path().join(opts.file_name);
if file_path.extension().is_none() {
file_path.set_extension("sql");
}
file_path
};
let content = std::fs::read_to_string(file_path)?;
match manager.apply_sql(&content) {
Ok(_) => {
println!("File was applied successfully");
}
Err(err) => {
println!("{}", err);
}
}
Ok(())
}


@@ -1,32 +0,0 @@
use crate::config::Config;
use crate::database::prelude::*;
use crate::database::MigrationManager;
use crate::opts::DowngradeCommandOpt;
use crate::StdResult;
use std::cmp;
use std::convert::TryFrom;
pub(crate) fn rollback_applied_migrations(
config: Config,
opts: DowngradeCommandOpt,
) -> StdResult<()> {
let mut manager = MigrationManager::try_from(&config)?;
let applied_migrations = manager.applied_migration_names()?;
let migrations = config.migrations()?;
let rollback_migrations_number = if opts.all_migrations {
applied_migrations.len()
} else {
cmp::min(opts.migrations_number, applied_migrations.len())
};
for migration_name in &applied_migrations[..rollback_migrations_number] {
if let Some(migration) = migrations.iter().find(|m| m.name() == migration_name) {
println!("downgrade {}...", migration.name());
manager.downgrade(&migration)?;
}
}
Ok(())
}


@@ -1,61 +0,0 @@
use crate::config::Config;
use crate::database::migration::filter_pending_migrations;
use crate::database::prelude::*;
use crate::database::{DatabaseConnectionManager, Migration, MigrationManager};
use crate::error::{Error, StdResult};
const EM_DASH: char = '—';
pub(crate) fn print_migration_lists(config: Config) -> StdResult<()> {
let applied_migration_names = match config.database.connection_string() {
Ok(ref database_connection_string) => {
let connection_manager = DatabaseConnectionManager::new(&config.database);
let conn = connection_manager.connect_with_string(database_connection_string)?;
let mut manager = MigrationManager::new(conn);
let applied_migration_names = manager.applied_migration_names()?;
show_applied_migrations(&applied_migration_names);
applied_migration_names
}
Err(e) if e == Error::MissedEnvVar(String::new()) => {
eprintln!("WARNING: {}", e);
eprintln!("WARNING: No connection to database");
Vec::new()
}
Err(e) => panic!("{}", e),
};
println!();
let pending_migrations =
filter_pending_migrations(config.migrations()?, &applied_migration_names);
show_pending_migrations(&pending_migrations);
Ok(())
}
fn show_applied_migrations(applied_migration_names: &[String]) {
println!("Applied migrations:");
if applied_migration_names.is_empty() {
println!("{}", EM_DASH);
} else {
applied_migration_names
.iter()
.rev()
.for_each(|name| println!("{}", name));
}
}
fn show_pending_migrations(pending_migrations: &[Migration]) {
println!("Pending migrations:");
if pending_migrations.is_empty() {
println!("{}", EM_DASH);
} else {
pending_migrations.iter().for_each(|m| {
println!("{}", m.name());
});
}
}


@@ -1,45 +0,0 @@
use crate::database::migration::*;
use crate::opts::UpgradeCommandOpt;
use crate::Config;
use crate::StdResult;
use std::convert::TryFrom;
pub(crate) fn upgrade_pending_migrations(config: Config, opts: UpgradeCommandOpt) -> StdResult<()> {
let mut manager = MigrationManager::try_from(&config)?;
let applied_migration_names = manager.applied_migration_names()?;
let migrations = config.migrations()?;
let pending_migrations = filter_pending_migrations(migrations, &applied_migration_names);
if pending_migrations.is_empty() {
println!("Up to date");
} else if let Some(migration_name) = opts.migration_name {
let target_migration = pending_migrations
.iter()
.find(|m| m.name() == &migration_name);
match target_migration {
Some(migration) => {
print_migration_info(migration);
manager.upgrade(migration)?;
}
None => {
eprintln!(r#"Cannot find migration with "{}" name"#, migration_name);
}
}
} else {
let upgrade_migrations_number = opts
.migrations_number
.unwrap_or_else(|| pending_migrations.len());
for migration in &pending_migrations[..upgrade_migrations_number] {
print_migration_info(migration);
manager.upgrade(migration)?;
}
}
Ok(())
}
fn print_migration_info(migration: &Migration) {
println!("upgrade {}...", migration.name());
}


@@ -1,138 +0,0 @@
use crate::database::migration::Migration;
use crate::error::{Error, MigraResult};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use std::{env, fs, io};
pub(crate) const MIGRA_TOML_FILENAME: &str = "Migra.toml";
pub(crate) const DEFAULT_DATABASE_CONNECTION_ENV: &str = "$DATABASE_URL";
#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct Config {
#[serde(skip)]
manifest_root: PathBuf,
root: PathBuf,
#[serde(default)]
pub database: DatabaseConfig,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) enum SupportedDatabaseClient {
Postgres,
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub(crate) struct DatabaseConfig {
pub client: Option<SupportedDatabaseClient>,
pub connection: Option<String>,
}
impl DatabaseConfig {
pub fn client(&self) -> MigraResult<SupportedDatabaseClient> {
Ok(SupportedDatabaseClient::Postgres)
}
pub fn connection_string(&self) -> MigraResult<String> {
let connection = self
.connection
.clone()
.unwrap_or_else(|| String::from(DEFAULT_DATABASE_CONNECTION_ENV));
if let Some(connection_env) = connection.strip_prefix("$") {
env::var(connection_env).map_err(|_| Error::MissedEnvVar(connection_env.to_string()))
} else {
Ok(connection)
}
}
}
impl Default for Config {
fn default() -> Config {
Config {
manifest_root: PathBuf::default(),
root: PathBuf::from("database"),
database: DatabaseConfig {
connection: Some(String::from(DEFAULT_DATABASE_CONNECTION_ENV)),
..Default::default()
},
}
}
}
fn search_for_directory_containing_file(path: &Path, file_name: &str) -> MigraResult<PathBuf> {
let file_path = path.join(file_name);
if file_path.is_file() {
Ok(path.to_owned())
} else {
path.parent()
.ok_or(Error::RootNotFound)
.and_then(|p| search_for_directory_containing_file(p, file_name))
}
}
fn recursive_find_project_root() -> MigraResult<PathBuf> {
let current_dir = std::env::current_dir()?;
search_for_directory_containing_file(&current_dir, MIGRA_TOML_FILENAME)
}
impl Config {
pub fn read(config_path: Option<PathBuf>) -> MigraResult<Config> {
let config_path = match config_path {
Some(mut config_path) if config_path.is_dir() => {
config_path.push(MIGRA_TOML_FILENAME);
Some(config_path)
}
Some(config_path) => Some(config_path),
None => recursive_find_project_root().ok(),
};
match config_path {
None => Ok(Config::default()),
Some(config_path) => {
let content = fs::read_to_string(&config_path)?;
let mut config: Config = toml::from_str(&content).expect("Cannot parse Migra.toml");
config.manifest_root = config_path
.parent()
.unwrap_or_else(|| Path::new(""))
.to_path_buf();
Ok(config)
}
}
}
}
impl Config {
pub fn directory_path(&self) -> PathBuf {
self.manifest_root.join(&self.root)
}
pub fn migration_dir_path(&self) -> PathBuf {
self.directory_path().join("migrations")
}
pub fn migrations(&self) -> MigraResult<Vec<Migration>> {
let mut entries = match self.migration_dir_path().read_dir() {
Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(Vec::new()),
entries => entries?
.map(|res| res.map(|e| e.path()))
.collect::<Result<Vec<_>, io::Error>>()?,
};
if entries.is_empty() {
return Ok(vec![]);
}
entries.sort();
let migrations = entries
.iter()
.filter_map(|path| Migration::new(&path))
.collect::<Vec<_>>();
Ok(migrations)
}
}
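The `connection_string` method above treats a value prefixed with `$` as the name of an environment variable holding the real connection string. The convention, extracted into a hypothetical standalone helper:

```rust
use std::env;

// A value starting with `$` names an environment variable to read;
// anything else is used as the connection string directly.
fn resolve_connection(value: &str) -> Result<String, String> {
    match value.strip_prefix('$') {
        Some(var) => env::var(var).map_err(|_| format!("missed env var: {}", var)),
        None => Ok(value.to_string()),
    }
}

fn main() {
    env::set_var("MIGRA_EXAMPLE_URL", "postgres://localhost/migra_tests");
    assert_eq!(
        resolve_connection("$MIGRA_EXAMPLE_URL").unwrap(),
        "postgres://localhost/migra_tests"
    );
    // A plain value is returned unchanged.
    assert_eq!(resolve_connection("postgres://direct").unwrap(), "postgres://direct");
    // An unset variable surfaces as an error, like Error::MissedEnvVar above.
    assert!(resolve_connection("$MIGRA_SURELY_UNSET_VAR").is_err());
    println!("ok");
}
```

This is why the default config (`connection = "$DATABASE_URL"`) works out of the box once `DATABASE_URL` is exported.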


@@ -1,17 +0,0 @@
use crate::error::StdResult;
pub trait ToSql {
fn to_sql(&self) -> String;
}
pub type ToSqlParams<'a> = &'a [&'a dyn ToSql];
impl ToSql for &str {
fn to_sql(&self) -> String {
format!("'{}'", self)
}
}
pub trait TryFromSql<QueryResultRow>: Sized {
fn try_from_sql(row: QueryResultRow) -> StdResult<Self>;
}


@@ -1,39 +0,0 @@
use super::prelude::*;
pub(crate) fn merge_query_with_params(query: &str, params: ToSqlParams) -> String {
params
.iter()
.enumerate()
.fold(query.to_string(), |acc, (i, p)| {
str::replace(&acc, &format!("${}", i + 1), &p.to_sql())
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn replace_one_param_in_query() {
assert_eq!(
merge_query_with_params("SELECT $1", &[&"foo"]),
"SELECT 'foo'"
);
}
#[test]
fn replace_two_params_in_query() {
assert_eq!(
merge_query_with_params("SELECT $1, $2", &[&"foo", &"bar"]),
"SELECT 'foo', 'bar'"
);
}
#[test]
fn replace_all_bonds_in_query_with_first_param() {
assert_eq!(
merge_query_with_params("SELECT $1, $1", &[&"foo"]),
"SELECT 'foo', 'foo'"
);
}
}


@@ -1,3 +0,0 @@
mod postgres;
pub use self::postgres::*;


@@ -1,45 +0,0 @@
use crate::database::builder::merge_query_with_params;
use crate::database::prelude::*;
use crate::error::StdResult;
use postgres::{Client, NoTls};
pub struct PostgresConnection {
client: Client,
}
impl OpenDatabaseConnection for PostgresConnection {
fn open(connection_string: &str) -> StdResult<Self> {
let client = Client::connect(connection_string, NoTls)?;
Ok(PostgresConnection { client })
}
}
impl DatabaseConnection for PostgresConnection {
fn batch_execute(&mut self, query: &str) -> StdResult<()> {
self.client.batch_execute(query)?;
Ok(())
}
fn execute<'b>(&mut self, query: &str, params: ToSqlParams<'b>) -> StdResult<u64> {
let stmt = merge_query_with_params(query, params);
let res = self.client.execute(stmt.as_str(), &[])?;
Ok(res)
}
fn query<'b>(&mut self, query: &str, params: ToSqlParams<'b>) -> StdResult<Vec<Vec<String>>> {
let stmt = merge_query_with_params(query, params);
let res = self.client.query(stmt.as_str(), &[])?;
let res = res
.into_iter()
.map(|row| {
let column: String = row.get(0);
vec![column]
})
.collect::<Vec<_>>();
Ok(res)
}
}


@@ -1,44 +0,0 @@
use super::adapter::ToSqlParams;
use super::clients::*;
use crate::config::{DatabaseConfig, SupportedDatabaseClient};
use crate::error::StdResult;
pub trait OpenDatabaseConnection: Sized {
fn open(connection_string: &str) -> StdResult<Self>;
}
pub trait DatabaseConnection {
fn batch_execute(&mut self, query: &str) -> StdResult<()>;
fn execute<'b>(&mut self, query: &str, params: ToSqlParams<'b>) -> StdResult<u64>;
fn query<'b>(&mut self, query: &str, params: ToSqlParams<'b>) -> StdResult<Vec<Vec<String>>>;
}
pub(crate) struct DatabaseConnectionManager {
config: DatabaseConfig,
}
impl DatabaseConnectionManager {
pub fn new(config: &DatabaseConfig) -> Self {
Self {
config: config.clone(),
}
}
pub fn connect_with_string(
&self,
connection_string: &str,
) -> StdResult<Box<dyn DatabaseConnection>> {
let conn = match self.config.client()? {
SupportedDatabaseClient::Postgres => PostgresConnection::open(&connection_string)?,
};
Ok(Box::new(conn))
}
pub fn connect(&self) -> StdResult<Box<dyn DatabaseConnection>> {
let connection_string = self.config.connection_string()?;
self.connect_with_string(&connection_string)
}
}


@@ -1,164 +0,0 @@
use super::connection::{DatabaseConnection, DatabaseConnectionManager};
use crate::config::Config;
use crate::StdResult;
use std::convert::TryFrom;
use std::fs;
use std::path::{Path, PathBuf};
#[derive(Debug)]
pub struct Migration {
upgrade_sql_file_path: PathBuf,
downgrade_sql_file_path: PathBuf,
name: String,
}
impl Migration {
pub(crate) fn new(directory: &Path) -> Option<Migration> {
if directory.is_dir() {
let name = directory
.file_name()
.and_then(|name| name.to_str())
.unwrap_or_default();
let upgrade_sql_file_path = directory.join("up.sql");
let downgrade_sql_file_path = directory.join("down.sql");
if upgrade_sql_file_path.exists() && downgrade_sql_file_path.exists() {
return Some(Migration {
upgrade_sql_file_path,
downgrade_sql_file_path,
name: String::from(name),
});
}
}
None
}
}
impl Migration {
pub fn name(&self) -> &String {
&self.name
}
fn upgrade_sql_content(&self) -> StdResult<String> {
let content = fs::read_to_string(&self.upgrade_sql_file_path)?;
Ok(content)
}
fn downgrade_sql_content(&self) -> StdResult<String> {
let content = fs::read_to_string(&self.downgrade_sql_file_path)?;
Ok(content)
}
}
pub struct MigrationManager {
pub(crate) conn: Box<dyn DatabaseConnection>,
}
impl MigrationManager {
pub fn new(conn: Box<dyn DatabaseConnection>) -> Self {
MigrationManager { conn }
}
}
impl TryFrom<&Config> for MigrationManager {
type Error = Box<dyn std::error::Error>;
fn try_from(config: &Config) -> Result<Self, Self::Error> {
let connection_manager = DatabaseConnectionManager::new(&config.database);
let conn = connection_manager.connect()?;
Ok(Self { conn })
}
}
pub fn is_migrations_table_not_found<D: std::fmt::Display>(error: D) -> bool {
error
.to_string()
.contains(r#"relation "migrations" does not exist"#)
}
pub trait DatabaseMigrationManager {
fn apply_sql(&mut self, sql_content: &str) -> StdResult<()>;
fn create_migrations_table(&mut self) -> StdResult<()>;
fn insert_migration_info(&mut self, name: &str) -> StdResult<u64>;
fn delete_migration_info(&mut self, name: &str) -> StdResult<u64>;
fn applied_migration_names(&mut self) -> StdResult<Vec<String>>;
fn upgrade(&mut self, migration: &Migration) -> StdResult<()> {
let content = migration.upgrade_sql_content()?;
self.create_migrations_table()?;
self.apply_sql(&content)?;
self.insert_migration_info(migration.name())?;
Ok(())
}
fn downgrade(&mut self, migration: &Migration) -> StdResult<()> {
let content = migration.downgrade_sql_content()?;
self.apply_sql(&content)?;
self.delete_migration_info(migration.name())?;
Ok(())
}
}
impl DatabaseMigrationManager for MigrationManager {
fn apply_sql(&mut self, sql_content: &str) -> StdResult<()> {
self.conn.batch_execute(sql_content)
}
fn create_migrations_table(&mut self) -> StdResult<()> {
self.conn.batch_execute(
r#"CREATE TABLE IF NOT EXISTS migrations (
id serial PRIMARY KEY,
name text NOT NULL UNIQUE
)"#,
)
}
fn insert_migration_info(&mut self, name: &str) -> StdResult<u64> {
self.conn
.execute("INSERT INTO migrations (name) VALUES ($1)", &[&name])
}
fn delete_migration_info(&mut self, name: &str) -> StdResult<u64> {
self.conn
.execute("DELETE FROM migrations WHERE name = $1", &[&name])
}
fn applied_migration_names(&mut self) -> StdResult<Vec<String>> {
let res = self
.conn
.query("SELECT name FROM migrations ORDER BY id DESC", &[])
.or_else(|e| {
if is_migrations_table_not_found(&e) {
Ok(Vec::new())
} else {
Err(e)
}
})?;
let applied_migration_names: Vec<String> = res
.into_iter()
.filter_map(|row| row.first().cloned())
.collect();
Ok(applied_migration_names)
}
}
pub fn filter_pending_migrations(
migrations: Vec<Migration>,
applied_migration_names: &[String],
) -> Vec<Migration> {
migrations
.into_iter()
.filter(|m| !applied_migration_names.contains(m.name()))
.collect()
}
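`filter_pending_migrations` above keeps only the migrations whose names are absent from the applied list. The same idea with plain name strings (a simplified sketch, not the actual `Migration` type):

```rust
// Keep everything that has not yet been applied, preserving order.
fn filter_pending(all: Vec<String>, applied: &[String]) -> Vec<String> {
    all.into_iter()
        .filter(|name| !applied.contains(name))
        .collect()
}

fn main() {
    let all = vec!["a".to_string(), "b".to_string(), "c".to_string()];
    let applied = vec!["a".to_string()];
    // Only "b" and "c" are still pending.
    assert_eq!(filter_pending(all, &applied), vec!["b", "c"]);
    println!("ok");
}
```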


@@ -1,14 +0,0 @@
pub(crate) mod adapter;
pub(crate) mod builder;
pub(crate) mod clients;
pub(crate) mod connection;
pub(crate) mod migration;
pub mod prelude {
pub use super::adapter::{ToSql, ToSqlParams, TryFromSql};
pub use super::connection::{DatabaseConnection, OpenDatabaseConnection};
pub use super::migration::DatabaseMigrationManager;
}
pub(crate) use connection::DatabaseConnectionManager;
pub(crate) use migration::{Migration, MigrationManager};


@@ -1,52 +0,0 @@
#![deny(clippy::all)]
#![forbid(unsafe_code)]
mod commands;
mod config;
mod database;
mod error;
mod opts;
use crate::error::StdResult;
use config::Config;
use opts::{AppOpt, Command, StructOpt};
use std::io;
fn main() -> StdResult<()> {
let opt = AppOpt::from_args();
match opt.command {
Command::Init => {
commands::initialize_migra_manifest(opt.config)?;
}
Command::Apply(opts) => {
let config = Config::read(opt.config)?;
commands::apply_sql(config, opts)?;
}
Command::Make(opts) => {
let config = Config::read(opt.config)?;
commands::make_migration(config, opts)?;
}
Command::List => {
let config = Config::read(opt.config)?;
commands::print_migration_lists(config)?;
}
Command::Upgrade(opts) => {
let config = Config::read(opt.config)?;
commands::upgrade_pending_migrations(config, opts)?;
}
Command::Downgrade(opts) => {
let config = Config::read(opt.config)?;
commands::rollback_applied_migrations(config, opts)?;
}
Command::Completions(opts) => {
AppOpt::clap().gen_completions_to(
env!("CARGO_BIN_NAME"),
opts.into(),
&mut io::stdout(),
);
}
}
Ok(())
}


@@ -1,374 +0,0 @@
pub use assert_cmd::prelude::*;
pub use predicates::str::contains;
pub use std::process::Command;
pub type TestResult = std::result::Result<(), Box<dyn std::error::Error>>;
pub const ROOT_PATH: &str = concat!(env!("CARGO_MANIFEST_DIR"), "/tests/data/");
pub fn path_to_file(file_name: &'static str) -> String {
ROOT_PATH.to_owned() + file_name
}
pub const DATABASE_URL_DEFAULT_ENV_NAME: &str = "DATABASE_URL";
pub const DATABASE_URL_ENV_VALUE: &str = "postgres://postgres:postgres@localhost:6000/migra_tests";
pub struct Env {
key: &'static str,
}
impl Env {
pub fn new(key: &'static str, value: &'static str) -> Self {
std::env::set_var(key, value);
Env { key }
}
}
impl Drop for Env {
fn drop(&mut self) {
std::env::remove_var(self.key);
}
}
mod init {
use super::*;
use std::fs;
#[test]
fn init_manifest_with_default_config() -> TestResult {
let manifest_path = "Migra.toml";
Command::cargo_bin("migra")?
.arg("init")
.assert()
.success()
.stdout(contains(format!("Created {}", &manifest_path)));
let content = fs::read_to_string(&manifest_path)?;
assert_eq!(
content,
r#"root = "database"
[database]
connection = "$DATABASE_URL"
"#
);
fs::remove_file(&manifest_path)?;
Ok(())
}
#[test]
fn init_manifest_in_custom_path() -> TestResult {
let manifest_path = path_to_file("Migra.toml");
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("init")
.assert()
.success()
.stdout(contains(format!("Created {}", manifest_path.as_str())));
let content = fs::read_to_string(&manifest_path)?;
assert_eq!(
content,
r#"root = "database"
[database]
connection = "$DATABASE_URL"
"#
);
fs::remove_file(&manifest_path)?;
Ok(())
}
}
mod list {
use super::*;
#[test]
fn empty_migration_list() -> TestResult {
Command::cargo_bin("migra")?
.arg("ls")
.assert()
.success()
.stderr(contains(
r#"WARNING: Missed "DATABASE_URL" environment variable
WARNING: No connection to database"#,
))
.stdout(contains(
r#"
Pending migrations:
"#,
));
Ok(())
}
#[test]
fn empty_migration_list_with_db() -> TestResult {
let env = Env::new(DATABASE_URL_DEFAULT_ENV_NAME, DATABASE_URL_ENV_VALUE);
Command::cargo_bin("migra")?
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
"#,
));
drop(env);
Ok(())
}
#[test]
fn empty_migration_list_with_url_in_manifest() -> TestResult {
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_url_empty.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
"#,
));
Ok(())
}
#[test]
fn empty_migration_list_with_env_in_manifest() -> TestResult {
let env = Env::new("DB_URL", DATABASE_URL_ENV_VALUE);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env_empty.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
"#,
));
drop(env);
Ok(())
}
#[test]
fn empty_applied_migrations() -> TestResult {
let env = Env::new(DATABASE_URL_DEFAULT_ENV_NAME, DATABASE_URL_ENV_VALUE);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
210218232851_create_articles
210218233414_create_persons
"#,
));
drop(env);
Ok(())
}
#[test]
fn applied_all_migrations() -> TestResult {
let env = Env::new(DATABASE_URL_DEFAULT_ENV_NAME, DATABASE_URL_ENV_VALUE);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("up")
.assert()
.success();
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
210218232851_create_articles
210218233414_create_persons
Pending migrations:
"#,
));
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("down")
.arg("--all")
.assert()
.success();
drop(env);
Ok(())
}
#[test]
fn applied_one_migrations() -> TestResult {
let env = Env::new(DATABASE_URL_DEFAULT_ENV_NAME, DATABASE_URL_ENV_VALUE);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("up")
.arg("-n")
.arg("1")
.assert()
.success();
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
210218232851_create_articles
Pending migrations:
210218233414_create_persons
"#,
));
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("down")
.assert()
.success();
drop(env);
Ok(())
}
}
mod make {
use super::*;
use std::fs;
#[test]
fn make_migration_directory() -> TestResult {
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_url.toml"))
.arg("make")
.arg("test")
.assert()
.success()
.stdout(contains("Structure for migration has been created in"));
let entries = fs::read_dir(path_to_file("migrations"))?
.map(|entry| entry.map(|e| e.path()))
.collect::<Result<Vec<_>, std::io::Error>>()?;
let dir_paths = entries
.iter()
.filter_map(|path| {
path.to_str().and_then(|path| {
if path.ends_with("_test") {
Some(path)
} else {
None
}
})
})
.collect::<Vec<_>>();
for dir_path in dir_paths.iter() {
let upgrade_content = fs::read_to_string(format!("{}/up.sql", dir_path))?;
let downgrade_content = fs::read_to_string(format!("{}/down.sql", dir_path))?;
assert_eq!(upgrade_content, "-- Your SQL goes here\n\n");
assert_eq!(
downgrade_content,
"-- This file should undo anything in `up.sql`\n\n"
);
fs::remove_dir_all(dir_path)?;
}
Ok(())
}
}
mod upgrade {
use super::*;
#[test]
fn applied_all_migrations() -> TestResult {
let env = Env::new(DATABASE_URL_DEFAULT_ENV_NAME, DATABASE_URL_ENV_VALUE);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("up")
.assert()
.success();
let mut conn = postgres::Client::connect(DATABASE_URL_ENV_VALUE, postgres::NoTls)?;
let res = conn.query("SELECT p.id, a.id FROM persons AS p, articles AS a", &[])?;
assert_eq!(
res.into_iter()
.map(|row| (row.get(0), row.get(1)))
.collect::<Vec<(i32, i32)>>(),
Vec::new()
);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("down")
.assert()
.success();
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env.toml"))
.arg("down")
.assert()
.success();
drop(env);
Ok(())
}
}

migra/Cargo.toml Normal file

@ -0,0 +1,23 @@
[package]
name = "migra"
version = "1.0.0"
authors = ["Dmitriy Pleshevskiy <dmitriy@ideascup.me>"]
edition = "2018"
description = "Migra is a simple library for managing SQL in your application"
homepage = "https://github.com/pleshevskiy/migra"
repository = "https://github.com/pleshevskiy/migra"
license = "MIT OR Apache-2.0"
keywords = ["migration", "sql", "manager"]
categories = ["accessibility", "database"]
readme = "README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
default = ["postgres"]
sqlite = ["rusqlite"]
[dependencies]
postgres = { version = "0.19", optional = true }
mysql = { version = "20.1", optional = true }
rusqlite = { version = "0.25", optional = true }

migra/README.md Normal file

@ -0,0 +1,86 @@
# Migra
[![CI](https://github.com/pleshevskiy/migra/actions/workflows/rust.yml/badge.svg?branch=main)](https://github.com/pleshevskiy/migra/actions/workflows/rust.yml)
[![unsafe forbidden](https://img.shields.io/badge/unsafe-forbidden-success.svg)](https://github.com/rust-secure-code/safety-dance/)
[![Crates.io](https://img.shields.io/crates/v/migra)](https://crates.io/crates/migra)
![Crates.io](https://img.shields.io/crates/l/migra)
Migra is a simple library for managing SQL in your application.
For example, if you have a task list application, you can upgrade each user's local database from version to version.
This is the core crate for [migra-cli](https://crates.io/crates/migra-cli), which lets you manage SQL for web
servers written in any programming language without being bound to SQL frameworks.
### Installation
Add `migra = { version = "1.0" }` as a dependency in `Cargo.toml`.
Predefined database clients are optional and live behind features of the same name.
To use one, enable the corresponding feature when adding the crate (`postgres`, `mysql`, `sqlite`).
`Cargo.toml` example:
```toml
[package]
name = "my-crate"
version = "0.1.0"
authors = ["Me <user@rust-lang.org>"]
[dependencies]
migra = { version = "1.0", features = ["postgres"] }
```
## Basic usage
**Note:** This example requires the `sqlite` feature to be enabled.
```rust
use migra::clients::{OpenDatabaseConnection, SqliteClient};
use migra::managers::{ManageTransaction, ManageMigrations};
fn main() -> migra::Result<()> {
let mut client = SqliteClient::new("./tasks.db")?;
client.create_migrations_table()?;
let mut migrations = client.get_applied_migrations()?;
client
.begin_transaction()
.and_then(|_| {
migrations.should_run_upgrade_migration(
&mut client,
"20210615_initial_migration",
r#"CREATE TABLE IF NOT EXISTS tasks (
title TEXT NOT NULL
);"#,
)?;
Ok(())
})
.and_then(|res| client.commit_transaction().and(Ok(res)))
.or_else(|err| client.rollback_transaction().and(Err(err)));
Ok(())
}
```
### Supported databases
| Database | Feature |
|----------|--------------|
| Postgres | postgres |
| MySQL | mysql |
| Sqlite | sqlite |
## License
Licensed under either of these:
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE_APACHE) or
https://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE_MIT) or
https://opensource.org/licenses/MIT)

migra/src/clients/mod.rs Normal file

@ -0,0 +1,39 @@
use crate::errors::MigraResult;
use crate::managers::{ManageMigrations, ManageTransaction};
/// A trait that helps to open a connection to a specific database client.
pub trait OpenDatabaseConnection
where
Self: Sized,
{
/// Opens a database connection with the default migrations table name (`migrations`).
fn new(connection_string: &str) -> MigraResult<Self> {
Self::manual(connection_string, "migrations")
}
/// Opens a database connection with a custom migrations table name.
fn manual(connection_string: &str, migrations_table_name: &str) -> MigraResult<Self>;
}
/// Any type that implements both the migration manager and the transaction manager
/// is considered a client.
pub trait Client: ManageMigrations + ManageTransaction {}
/// If your application lets users choose which database to use at runtime,
/// you will most likely need this boxed client type.
pub type AnyClient = Box<(dyn Client + 'static)>;
#[cfg(feature = "postgres")]
mod postgres;
#[cfg(feature = "postgres")]
pub use self::postgres::Client as PostgresClient;
#[cfg(feature = "mysql")]
mod mysql;
#[cfg(feature = "mysql")]
pub use self::mysql::Client as MysqlClient;
#[cfg(feature = "sqlite")]
mod sqlite;
#[cfg(feature = "sqlite")]
pub use self::sqlite::Client as SqliteClient;
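The boxed `AnyClient` alias above erases the concrete client type so it can be chosen at runtime. A stand-alone sketch of that pattern (a toy trait in place of the crate's `Client`, with a hypothetical `create_client` factory):

```rust
// Toy stand-in for the `Client` trait to illustrate why `AnyClient` is a Box<dyn ...>.
trait Client {
    fn name(&self) -> &'static str;
}

struct Postgres;
struct Sqlite;

impl Client for Postgres {
    fn name(&self) -> &'static str { "postgres" }
}
impl Client for Sqlite {
    fn name(&self) -> &'static str { "sqlite" }
}

// Mirrors `pub type AnyClient = Box<(dyn Client + 'static)>;`
type AnyClient = Box<dyn Client + 'static>;

// Hypothetical factory: pick an implementation at runtime, e.g. from a config value.
fn create_client(kind: &str) -> AnyClient {
    match kind {
        "postgres" => Box::new(Postgres),
        _ => Box::new(Sqlite),
    }
}

fn main() {
    assert_eq!(create_client("postgres").name(), "postgres");
    assert_eq!(create_client("sqlite").name(), "sqlite");
}
```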

migra/src/clients/mysql.rs Normal file

@ -0,0 +1,94 @@
use super::OpenDatabaseConnection;
use crate::errors::{DbKind, Error, MigraResult, StdResult};
use crate::managers::{BatchExecute, ManageMigrations, ManageTransaction};
use crate::migration;
use mysql::prelude::*;
use mysql::{Pool, PooledConn};
/// Predefined `MySQL` client.
///
/// **Note:** Requires enabling `mysql` feature.
#[derive(Debug)]
pub struct Client {
conn: PooledConn,
migrations_table_name: String,
}
impl Client {
/// Provide access to the original database connection.
#[must_use]
pub fn conn(&self) -> &PooledConn {
&self.conn
}
}
impl OpenDatabaseConnection for Client {
fn manual(connection_string: &str, migrations_table_name: &str) -> MigraResult<Self> {
let conn = Pool::new_manual(1, 1, connection_string)
.and_then(|pool| pool.get_conn())
.map_err(|err| Error::db(err.into(), DbKind::DatabaseConnection))?;
Ok(Client {
conn,
migrations_table_name: migrations_table_name.to_owned(),
})
}
}
impl BatchExecute for Client {
fn batch_execute(&mut self, sql: &str) -> StdResult<()> {
self.conn.query_drop(sql).map_err(From::from)
}
}
impl ManageTransaction for Client {}
impl ManageMigrations for Client {
fn create_migrations_table(&mut self) -> MigraResult<()> {
let stmt = format!(
r#"CREATE TABLE IF NOT EXISTS {} (
id int AUTO_INCREMENT PRIMARY KEY,
name varchar(256) NOT NULL UNIQUE
)"#,
&self.migrations_table_name
);
self.batch_execute(&stmt)
.map_err(|err| Error::db(err, DbKind::CreateMigrationsTable))
}
fn insert_migration(&mut self, name: &str) -> MigraResult<u64> {
let stmt = format!(
"INSERT INTO {} (name) VALUES (?)",
&self.migrations_table_name
);
self.conn
.exec_first(&stmt, (name,))
.map(Option::unwrap_or_default)
.map_err(|err| Error::db(err.into(), DbKind::InsertMigration))
}
fn delete_migration(&mut self, name: &str) -> MigraResult<u64> {
let stmt = format!("DELETE FROM {} WHERE name = ?", &self.migrations_table_name);
self.conn
.exec_first(&stmt, (name,))
.map(Option::unwrap_or_default)
.map_err(|err| Error::db(err.into(), DbKind::DeleteMigration))
}
fn get_applied_migrations(&mut self) -> MigraResult<migration::List> {
let stmt = format!(
"SELECT name FROM {} ORDER BY id DESC",
&self.migrations_table_name
);
self.conn
.query::<String, _>(stmt)
.map(From::from)
.map_err(|err| Error::db(err.into(), DbKind::GetAppliedMigrations))
}
}
impl super::Client for Client {}

migra/src/clients/postgres.rs Normal file

@ -0,0 +1,105 @@
use super::OpenDatabaseConnection;
use crate::errors::{DbKind, Error, MigraResult, StdResult};
use crate::managers::{BatchExecute, ManageMigrations, ManageTransaction};
use crate::migration;
use postgres::{Client as PostgresClient, NoTls};
use std::fmt;
/// Predefined `Postgres` client.
///
/// **Note:** Requires enabling `postgres` feature.
pub struct Client {
conn: PostgresClient,
migrations_table_name: String,
}
impl Client {
/// Provide access to the original database connection.
#[must_use]
pub fn conn(&self) -> &PostgresClient {
&self.conn
}
}
impl fmt::Debug for Client {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt.debug_struct("Client")
.field("migrations_table_name", &self.migrations_table_name)
.finish()
}
}
impl OpenDatabaseConnection for Client {
fn manual(connection_string: &str, migrations_table_name: &str) -> MigraResult<Self> {
let conn = PostgresClient::connect(connection_string, NoTls)
.map_err(|err| Error::db(err.into(), DbKind::DatabaseConnection))?;
Ok(Client {
conn,
migrations_table_name: migrations_table_name.to_owned(),
})
}
}
impl BatchExecute for Client {
fn batch_execute(&mut self, sql: &str) -> StdResult<()> {
self.conn.batch_execute(sql).map_err(From::from)
}
}
impl ManageTransaction for Client {}
impl ManageMigrations for Client {
fn create_migrations_table(&mut self) -> MigraResult<()> {
let stmt = format!(
r#"CREATE TABLE IF NOT EXISTS {} (
id serial PRIMARY KEY,
name text NOT NULL UNIQUE
)"#,
&self.migrations_table_name
);
self.batch_execute(&stmt)
.map_err(|err| Error::db(err, DbKind::CreateMigrationsTable))
}
fn insert_migration(&mut self, name: &str) -> MigraResult<u64> {
let stmt = format!(
"INSERT INTO {} (name) VALUES ($1)",
&self.migrations_table_name
);
self.conn
.execute(stmt.as_str(), &[&name])
.map_err(|err| Error::db(err.into(), DbKind::InsertMigration))
}
fn delete_migration(&mut self, name: &str) -> MigraResult<u64> {
let stmt = format!(
"DELETE FROM {} WHERE name = $1",
&self.migrations_table_name
);
self.conn
.execute(stmt.as_str(), &[&name])
.map_err(|err| Error::db(err.into(), DbKind::DeleteMigration))
}
fn get_applied_migrations(&mut self) -> MigraResult<migration::List> {
let stmt = format!(
"SELECT name FROM {} ORDER BY id DESC",
&self.migrations_table_name
);
self.conn
.query(stmt.as_str(), &[])
.and_then(|res| {
res.into_iter()
.map(|row| row.try_get(0))
.collect::<Result<Vec<String>, _>>()
})
.map(From::from)
.map_err(|err| Error::db(err.into(), DbKind::GetAppliedMigrations))
}
}
impl super::Client for Client {}

migra/src/clients/sqlite.rs Normal file

@ -0,0 +1,103 @@
use super::OpenDatabaseConnection;
use crate::errors::{DbKind, Error, MigraResult, StdResult};
use crate::managers::{BatchExecute, ManageMigrations, ManageTransaction};
use crate::migration;
use rusqlite::Connection;
/// Predefined `Sqlite` client.
///
/// **Note:** Requires enabling `sqlite` feature.
#[derive(Debug)]
pub struct Client {
conn: Connection,
migrations_table_name: String,
}
impl Client {
/// Provide access to the original database connection.
#[must_use]
pub fn conn(&self) -> &Connection {
&self.conn
}
}
impl OpenDatabaseConnection for Client {
fn manual(connection_string: &str, migrations_table_name: &str) -> MigraResult<Self> {
let conn = if connection_string == ":memory:" {
Connection::open_in_memory()
} else {
Connection::open(connection_string)
}
.map_err(|err| Error::db(err.into(), DbKind::DatabaseConnection))?;
Ok(Client {
conn,
migrations_table_name: migrations_table_name.to_owned(),
})
}
}
impl BatchExecute for Client {
fn batch_execute(&mut self, sql: &str) -> StdResult<()> {
self.conn.execute_batch(sql).map_err(From::from)
}
}
impl ManageTransaction for Client {}
impl ManageMigrations for Client {
fn create_migrations_table(&mut self) -> MigraResult<()> {
let stmt = format!(
r#"CREATE TABLE IF NOT EXISTS {} (
id integer PRIMARY KEY AUTOINCREMENT,
name text NOT NULL UNIQUE
)"#,
&self.migrations_table_name
);
self.batch_execute(&stmt)
.map_err(|err| Error::db(err, DbKind::CreateMigrationsTable))
}
fn insert_migration(&mut self, name: &str) -> MigraResult<u64> {
let stmt = format!(
"INSERT INTO {} (name) VALUES ($1)",
&self.migrations_table_name
);
self.conn
.execute(&stmt, [name])
.map(|res| res as u64)
.map_err(|err| Error::db(err.into(), DbKind::InsertMigration))
}
fn delete_migration(&mut self, name: &str) -> MigraResult<u64> {
let stmt = format!(
"DELETE FROM {} WHERE name = $1",
&self.migrations_table_name
);
self.conn
.execute(&stmt, [name])
.map(|res| res as u64)
.map_err(|err| Error::db(err.into(), DbKind::DeleteMigration))
}
fn get_applied_migrations(&mut self) -> MigraResult<migration::List> {
let stmt = format!(
"SELECT name FROM {} ORDER BY id DESC",
&self.migrations_table_name
);
self.conn
.prepare(&stmt)
.and_then(|mut stmt| {
stmt.query_map([], |row| row.get(0))?
.collect::<Result<Vec<String>, _>>()
})
.map(From::from)
.map_err(|err| Error::db(err.into(), DbKind::GetAppliedMigrations))
}
}
impl super::Client for Client {}

migra/src/errors.rs Normal file

@ -0,0 +1,129 @@
use std::fmt;
use std::io;
/// A helper type for any standard error.
pub type StdError = Box<dyn std::error::Error + 'static + Sync + Send>;
/// A helper type for any result with standard error.
pub type StdResult<T> = Result<T, StdError>;
/// A helper type for any result with migra error.
pub type MigraResult<T> = Result<T, Error>;
/// Migra error
#[derive(Debug)]
#[non_exhaustive]
pub enum Error {
/// Represents database errors.
Db(DbError),
/// Represents standard input output errors.
Io(io::Error),
}
impl fmt::Display for Error {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Error::Db(ref error) => write!(fmt, "{}", error),
Error::Io(ref error) => write!(fmt, "{}", error),
}
}
}
impl std::error::Error for Error {}
impl PartialEq for Error {
fn eq(&self, other: &Self) -> bool {
std::mem::discriminant(self) == std::mem::discriminant(other)
}
}
impl From<io::Error> for Error {
#[inline]
fn from(err: io::Error) -> Error {
Error::Io(err)
}
}
impl Error {
/// Creates a database error.
#[must_use]
pub fn db(origin: StdError, kind: DbKind) -> Self {
Error::Db(DbError { kind, origin })
}
}
/// All kinds of errors this crate works with.
#[derive(Debug)]
#[non_exhaustive]
pub enum DbKind {
/// Failed to connect to the database.
DatabaseConnection,
/// Failed to open transaction.
OpenTransaction,
/// Failed to commit transaction.
CommitTransaction,
/// Failed to rollback transaction.
RollbackTransaction,
/// Failed to create a migrations table.
CreateMigrationsTable,
/// Failed to apply SQL.
ApplySql,
/// Failed to insert a migration.
InsertMigration,
/// Failed to delete a migration.
DeleteMigration,
/// Failed to get applied migrations.
GetAppliedMigrations,
}
impl fmt::Display for DbKind {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DbKind::DatabaseConnection => fmt.write_str("Failed database connection"),
DbKind::OpenTransaction => fmt.write_str("Failed to open a transaction"),
DbKind::CommitTransaction => fmt.write_str("Failed to commit a transaction"),
DbKind::RollbackTransaction => fmt.write_str("Failed to rollback a transaction"),
DbKind::CreateMigrationsTable => fmt.write_str("Failed to create a migrations table"),
DbKind::ApplySql => fmt.write_str("Failed to apply sql"),
DbKind::InsertMigration => fmt.write_str("Failed to insert a migration"),
DbKind::DeleteMigration => fmt.write_str("Failed to delete a migration"),
DbKind::GetAppliedMigrations => fmt.write_str("Failed to get applied migrations"),
}
}
}
/// Represents database error.
#[derive(Debug)]
pub struct DbError {
kind: DbKind,
origin: StdError,
}
impl fmt::Display for DbError {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(fmt, "{} - {}", &self.kind, &self.origin)
}
}
impl DbError {
/// Returns database error kind.
#[must_use]
pub fn kind(&self) -> &DbKind {
&self.kind
}
/// Returns origin database error.
#[must_use]
pub fn origin(&self) -> &StdError {
&self.origin
}
}

migra/src/fs.rs Normal file

@ -0,0 +1,34 @@
use crate::errors::MigraResult;
use crate::migration;
use std::io;
use std::path::Path;
/// Checks if the directory is a migration according to the principles of the crate.
#[must_use]
pub fn is_migration_dir(path: &Path) -> bool {
path.join("up.sql").exists() && path.join("down.sql").exists()
}
/// Gets all migration directories from the given path and returns them as a [List].
///
/// This utility checks whether each directory is a migration. See [`is_migration_dir`] for
/// more information.
///
/// [List]: migration::List
pub fn get_all_migrations(dir_path: &Path) -> MigraResult<migration::List> {
let mut entries = match dir_path.read_dir() {
Err(e) if e.kind() == io::ErrorKind::NotFound => vec![],
entries => entries?
.filter_map(|res| res.ok().map(|e| e.path()))
.filter(|path| is_migration_dir(path))
.collect::<Vec<_>>(),
};
if entries.is_empty() {
return Ok(migration::List::new());
}
entries.sort();
Ok(migration::List::from(entries))
}
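For reference, a migrations directory these helpers recognize looks like the following (migration names taken from the test output earlier in this diff; only directories containing both `up.sql` and `down.sql` are treated as migrations):

```
migrations/
├── 210218232851_create_articles/
│   ├── up.sql
│   └── down.sql
└── 210218233414_create_persons/
    ├── up.sql
    └── down.sql
```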

migra/src/lib.rs Normal file

@ -0,0 +1,97 @@
//! # Migra
//!
//! Migra is a simple library for managing SQL in your application.
//!
//! For example, if you have a task list application, you can upgrade each user's local database from version to version.
//!
//! This is the core crate for [migra-cli](https://crates.io/crates/migra-cli), which lets you manage SQL for web
//! servers written in any programming language without being bound to SQL frameworks.
//!
//! ## Installation
//!
//! Add `migra = { version = "1.0" }` as a dependency in `Cargo.toml`.
//!
//! Predefined database clients are optional and live behind features of the same name.
//! To use one, enable the corresponding feature when adding the crate (`postgres`, `mysql`, `sqlite`).
//!
//! `Cargo.toml` example:
//!
//! ```toml
//! [package]
//! name = "my-crate"
//! version = "0.1.0"
//! authors = ["Me <user@rust-lang.org>"]
//!
//! [dependencies]
//! migra = { version = "1.0", features = ["postgres"] }
//! ```
//!
//! ## Basic usage
//!
//! **Note:** This example requires the `sqlite` feature to be enabled.
//!
//! ```rust
//! use migra::clients::{OpenDatabaseConnection, SqliteClient};
//! use migra::managers::{ManageTransaction, ManageMigrations};
//!
//! fn main() -> migra::Result<()> {
//! let mut client = SqliteClient::new(":memory:")?;
//!
//! client.create_migrations_table()?;
//!
//! let mut migrations = client.get_applied_migrations()?;
//!
//! client
//! .begin_transaction()
//! .and_then(|_| {
//! migrations.should_run_upgrade_migration(
//! &mut client,
//! "20210615_initial_migration",
//! r#"CREATE TABLE IF NOT EXISTS tasks (
//! title TEXT NOT NULL
//! );"#,
//! )?;
//!
//! Ok(())
//! })
//! .and_then(|res| client.commit_transaction().and(Ok(res)))
//! .or_else(|err| client.rollback_transaction().and(Err(err)));
//!
//! Ok(())
//! }
//! ```
//!
//! ### Supported databases
//!
//! | Database Client | Feature |
//! |-----------------|--------------|
//! | `Postgres` | postgres |
//! | `MySQL` | mysql |
//! | `Sqlite` | sqlite |
//!
#![deny(missing_debug_implementations)]
#![deny(missing_docs)]
#![deny(clippy::all, clippy::pedantic)]
// TODO: add missing errors doc
#![allow(clippy::missing_errors_doc)]
/// Includes additional client tools and contains predefined
/// database clients that have been enabled in the features.
pub mod clients;
/// Includes all types of errors that are used in the crate.
pub mod errors;
/// Includes utilities that work with the file system.
pub mod fs;
/// Includes all the basic traits that will allow you
/// to create your own client.
pub mod managers;
/// Includes basic structures of migration and migration
/// lists, that are used in managers and fs utils.
pub mod migration;
pub use errors::{Error, MigraResult as Result, StdResult};
pub use migration::{List as MigrationList, Migration};

migra/src/managers.rs Normal file

@ -0,0 +1,74 @@
use crate::errors::{DbKind, Error, MigraResult, StdResult};
use crate::migration;
/// Used to execute SQL.
///
/// It is a supertrait for the manager traits.
pub trait BatchExecute {
/// Executes SQL via the original database client.
fn batch_execute(&mut self, sql: &str) -> StdResult<()>;
}
/// Used to manage transaction in the database connection.
pub trait ManageTransaction: BatchExecute {
/// Opens a transaction on the database connection.
fn begin_transaction(&mut self) -> MigraResult<()> {
self.batch_execute("BEGIN")
.map_err(|err| Error::db(err, DbKind::OpenTransaction))
}
/// Cancels (rolls back) a transaction on the database connection.
fn rollback_transaction(&mut self) -> MigraResult<()> {
self.batch_execute("ROLLBACK")
.map_err(|err| Error::db(err, DbKind::RollbackTransaction))
}
/// Applies (commits) a transaction on the database connection.
fn commit_transaction(&mut self) -> MigraResult<()> {
self.batch_execute("COMMIT")
.map_err(|err| Error::db(err, DbKind::CommitTransaction))
}
}
/// Used to manage migrations in the database connection.
pub trait ManageMigrations: BatchExecute {
/// Applies SQL. Similar to [`BatchExecute`], but returns a migra [`Error`].
fn apply_sql(&mut self, sql: &str) -> MigraResult<()> {
self.batch_execute(sql)
.map_err(|err| Error::db(err, DbKind::ApplySql))
}
/// Creates the migrations table.
fn create_migrations_table(&mut self) -> MigraResult<()>;
/// Inserts a new migration into the table.
fn insert_migration(&mut self, name: &str) -> MigraResult<u64>;
/// Deletes a migration from the table.
fn delete_migration(&mut self, name: &str) -> MigraResult<u64>;
/// Gets the applied migrations from the table.
fn get_applied_migrations(&mut self) -> MigraResult<migration::List>;
/// Applies SQL to upgrade database schema and inserts new migration to table.
///
/// **Note:** Must be run inside a transaction; otherwise, if the migration fails,
/// the database may be left in an inconsistent state.
fn run_upgrade_migration(&mut self, name: &str, content: &str) -> MigraResult<()> {
self.apply_sql(content)?;
self.insert_migration(name)?;
Ok(())
}
/// Applies SQL to downgrade database schema and deletes migration from table.
///
/// **Note:** Must be run inside a transaction; otherwise, if the migration fails,
/// the database may be left in an inconsistent state.
fn run_downgrade_migration(&mut self, name: &str, content: &str) -> MigraResult<()> {
self.apply_sql(content)?;
self.delete_migration(name)?;
Ok(())
}
}
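Because `ManageTransaction` consists only of default methods on top of `BatchExecute`, a custom client gets transaction control by implementing `batch_execute` alone. A stand-alone sketch of that design (toy traits mirroring the ones above, recording SQL instead of hitting a database):

```rust
// Toy mirror of the supertrait design: the default methods delegate to batch_execute.
trait BatchExecute {
    fn batch_execute(&mut self, sql: &str) -> Result<(), String>;
}

trait ManageTransaction: BatchExecute {
    fn begin_transaction(&mut self) -> Result<(), String> {
        self.batch_execute("BEGIN")
    }
    fn commit_transaction(&mut self) -> Result<(), String> {
        self.batch_execute("COMMIT")
    }
    fn rollback_transaction(&mut self) -> Result<(), String> {
        self.batch_execute("ROLLBACK")
    }
}

// A fake client implements only batch_execute; transactions come for free.
struct FakeClient {
    log: Vec<String>,
}

impl BatchExecute for FakeClient {
    fn batch_execute(&mut self, sql: &str) -> Result<(), String> {
        self.log.push(sql.to_string());
        Ok(())
    }
}

impl ManageTransaction for FakeClient {}

fn main() {
    let mut client = FakeClient { log: Vec::new() };
    client.begin_transaction().unwrap();
    client.commit_transaction().unwrap();
    assert_eq!(client.log, vec!["BEGIN", "COMMIT"]);
}
```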

migra/src/migration.rs Normal file

@ -0,0 +1,239 @@
use crate::errors::MigraResult;
use crate::managers::ManageMigrations;
use std::iter::FromIterator;
/// A simple wrapper over a migration name string.
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub struct Migration {
name: String,
}
impl Migration {
/// Creates new migration by name.
#[must_use]
pub fn new(name: &str) -> Self {
Migration {
name: name.to_owned(),
}
}
/// Returns name of migration.
#[must_use]
pub fn name(&self) -> &String {
&self.name
}
}
/// A wrapper over a migration vector. It dereferences to a vector and has
/// a few additional utilities for handling migrations.
///
/// Can be presented as a list of all migrations, a list of pending migrations
/// or a list of applied migrations, depending on the implementation.
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub struct List {
inner: Vec<Migration>,
}
impl<T: AsRef<std::path::Path>> From<Vec<T>> for List {
fn from(list: Vec<T>) -> Self {
List {
inner: list
.iter()
.map(AsRef::as_ref)
.map(|path| {
path.file_name()
.and_then(std::ffi::OsStr::to_str)
.expect("Cannot read migration name")
})
.map(Migration::new)
.collect(),
}
}
}
impl From<Vec<Migration>> for List {
fn from(list: Vec<Migration>) -> Self {
List { inner: list }
}
}
impl FromIterator<Migration> for List {
fn from_iter<I: IntoIterator<Item = Migration>>(iter: I) -> Self {
let mut list = List::new();
for item in iter {
list.push(item);
}
list
}
}
impl<'a> FromIterator<&'a Migration> for List {
fn from_iter<I: IntoIterator<Item = &'a Migration>>(iter: I) -> Self {
let mut list = List::new();
for item in iter {
list.push(item.clone());
}
list
}
}
impl std::ops::Deref for List {
type Target = Vec<Migration>;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl List {
/// Creates empty migration list.
#[must_use]
pub fn new() -> Self {
List { inner: Vec::new() }
}
/// Push migration to list.
pub fn push(&mut self, migration: Migration) {
self.inner.push(migration);
}
/// Push migration name to list.
///
/// # Example
///
/// ```rust
/// # use migra::migration::List;
/// # let mut list = List::new();
/// list.push_name("name");
/// # assert_eq!(list, List::from(vec!["name"]));
/// ```
///
/// Is identical to the following
/// ```rust
/// # use migra::migration::{List, Migration};
/// # let mut list = List::new();
/// list.push(Migration::new("name"));
/// # assert_eq!(list, List::from(vec!["name"]));
/// ```
pub fn push_name(&mut self, name: &str) {
self.inner.push(Migration::new(name));
}
/// Check if list contains specific migration.
#[must_use]
pub fn contains(&self, other_migration: &Migration) -> bool {
self.inner
.iter()
.any(|migration| migration == other_migration)
}
/// Check if list contains migration with specific name.
#[must_use]
pub fn contains_name(&self, name: &str) -> bool {
self.inner.iter().any(|migration| migration.name() == name)
}
/// Exclude specific list from current list.
#[must_use]
pub fn exclude(&self, list: &List) -> List {
self.inner
.iter()
.filter(|migration| !list.contains_name(migration.name()))
.collect()
}
/// Runs an upgrade migration with the given SQL content and adds it to the current list
/// if the list does not already contain a migration with the given name.
pub fn should_run_upgrade_migration(
&mut self,
client: &mut dyn ManageMigrations,
name: &str,
content: &str,
) -> MigraResult<bool> {
let is_missed = !self.contains_name(name);
if is_missed {
client.run_upgrade_migration(name, content)?;
self.push_name(name);
}
Ok(is_missed)
}
/// Runs a downgrade migration with SQL content and removes the last migration from the
/// current list if the last item in the list has the specified name.
pub fn should_run_downgrade_migration(
&mut self,
client: &mut dyn ManageMigrations,
name: &str,
content: &str,
) -> MigraResult<bool> {
let is_latest = self.inner.last() == Some(&Migration::new(name));
if is_latest {
client.run_downgrade_migration(name, content)?;
self.inner.pop();
}
Ok(is_latest)
}
}
#[cfg(test)]
mod tests {
use super::*;
const FIRST_MIGRATION: &str = "initial_migration";
const SECOND_MIGRATION: &str = "new_migration";
#[test]
fn push_migration_to_list() {
let mut list = List::new();
list.push(Migration::new(FIRST_MIGRATION));
assert_eq!(list, List::from(vec![FIRST_MIGRATION]));
list.push(Migration::new(SECOND_MIGRATION));
assert_eq!(list, List::from(vec![FIRST_MIGRATION, SECOND_MIGRATION]));
}
#[test]
fn push_name_to_list() {
let mut list = List::new();
list.push_name(FIRST_MIGRATION);
assert_eq!(list, List::from(vec![FIRST_MIGRATION]));
list.push_name(&String::from(SECOND_MIGRATION));
assert_eq!(list, List::from(vec![FIRST_MIGRATION, SECOND_MIGRATION]));
}
#[test]
fn contains_migration() {
let list = List::from(vec![FIRST_MIGRATION]);
assert!(list.contains(&Migration::new(FIRST_MIGRATION)));
assert!(!list.contains(&Migration::new(SECOND_MIGRATION)));
}
#[test]
fn contains_migration_name() {
let list = List::from(vec![FIRST_MIGRATION]);
assert!(list.contains_name(FIRST_MIGRATION));
assert!(!list.contains_name(SECOND_MIGRATION));
}
#[test]
fn create_excluded_migration_list() {
let all_migrations = List::from(vec![FIRST_MIGRATION, SECOND_MIGRATION]);
let applied_migrations = List::from(vec![FIRST_MIGRATION]);
let excluded = all_migrations.exclude(&applied_migrations);
assert_eq!(excluded, List::from(vec![SECOND_MIGRATION]));
}
}

migra_cli/Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "migra-cli"
version = "0.2.0"
version = "0.6.0"
authors = ["Dmitriy Pleshevskiy <dmitriy@ideascup.me>"]
edition = "2018"
description = "Simple SQL migration manager for your project"
@ -12,6 +12,32 @@ categories = ["accessibility", "database", "command-line-interface"]
readme = "../README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
default = ["postgres"]
postgres = ["migra/postgres"]
sqlite = ["migra/sqlite"]
mysql = ["migra/mysql"]
[dependencies]
migra = { version = "1", path = "../migra" }
cfg-if = "1.0"
structopt = "0.3"
serde = { version = "1.0", features = ["derive"] }
toml = "0.5"
chrono = "0.4"
dotenv = { version = "0.15", optional = true }
[dev-dependencies]
assert_cmd = "1"
predicates = "1"
client_postgres = { package = "postgres", version = "0.19" }
client_mysql = { package = "mysql", version = "20.1" }
client_rusqlite = { package = "rusqlite", version = "0.25" }
[badges]
maintenance = { status = "actively-developed" }
[[bin]]
name = "migra"
path = "src/main.rs"
@ -19,14 +45,3 @@ path = "src/main.rs"
[[test]]
name = "integration"
path = "tests/commands.rs"
[dependencies]
structopt = "0.3"
serde = { version = "1.0", features = ["derive"] }
toml = "0.5"
chrono = "0.4.19"
postgres = "0.19.0"
[dev-dependencies]
assert_cmd = "1.0.3"
predicates = "1.0.7"

migra_cli/src/app.rs Normal file

@ -0,0 +1,58 @@
use crate::commands;
use crate::error::MigraResult;
use crate::opts::Command;
use crate::AppOpt;
use crate::Config;
use std::path::PathBuf;
use structopt::StructOpt;
#[derive(Debug, Clone)]
pub(crate) struct App {
app_opt: AppOpt,
}
impl App {
pub fn new(app_opt: AppOpt) -> Self {
App { app_opt }
}
pub fn config_path(&self) -> Option<&PathBuf> {
self.app_opt.config_path.as_ref()
}
pub fn config(&self) -> MigraResult<Config> {
Config::read(self.config_path())
}
pub fn run_command(&self) -> migra::StdResult<()> {
match self.app_opt.command.clone() {
Command::Init => {
commands::initialize_migra_manifest(self)?;
}
Command::Apply(ref cmd_opts) => {
commands::apply_sql(self, cmd_opts)?;
}
Command::Make(ref cmd_opts) => {
commands::make_migration(self, cmd_opts)?;
}
Command::List => {
commands::print_migration_lists(self)?;
}
Command::Upgrade(ref cmd_opts) => {
commands::upgrade_pending_migrations(self, cmd_opts)?;
}
Command::Downgrade(ref cmd_opts) => {
commands::rollback_applied_migrations(self, cmd_opts)?;
}
Command::Completions(cmd_opts) => {
AppOpt::clap().gen_completions_to(
env!("CARGO_BIN_NAME"),
cmd_opts.into(),
&mut std::io::stdout(),
);
}
}
Ok(())
}
}


@ -0,0 +1,31 @@
use crate::app::App;
use crate::database;
use crate::opts::ApplyCommandOpt;
pub(crate) fn apply_sql(app: &App, cmd_opts: &ApplyCommandOpt) -> migra::StdResult<()> {
let config = app.config()?;
let mut client = database::create_client_from_config(&config)?;
let file_contents = cmd_opts
.file_paths
.clone()
.into_iter()
.map(|file_path| {
let mut file_path = config.directory_path().join(file_path);
if file_path.extension().is_none() {
file_path.set_extension("sql");
}
file_path
})
.map(std::fs::read_to_string)
.collect::<Result<Vec<_>, _>>()?;
database::run_in_transaction(&mut client, |client| {
file_contents
.iter()
.try_for_each(|content| client.apply_sql(content))
.map_err(From::from)
})?;
Ok(())
}


@ -0,0 +1,50 @@
use crate::app::App;
use crate::database;
use crate::opts::DowngradeCommandOpt;
use std::cmp;
pub(crate) fn rollback_applied_migrations(
app: &App,
opts: &DowngradeCommandOpt,
) -> migra::StdResult<()> {
let config = app.config()?;
let mut client = database::create_client_from_config(&config)?;
client.create_migrations_table()?;
let migrations_dir_path = config.migration_dir_path();
let applied_migrations = client.get_applied_migrations()?;
let all_migrations = migra::fs::get_all_migrations(&migrations_dir_path)?;
let rollback_migrations_number = if opts.all_migrations {
applied_migrations.len()
} else {
cmp::min(opts.migrations_number, applied_migrations.len())
};
let migrations = applied_migrations[..rollback_migrations_number].to_vec();
let migrations_with_content = migrations
.iter()
.map(|migration| {
let migration_name = migration.name();
let migration_file_path = migrations_dir_path.join(migration_name).join("down.sql");
std::fs::read_to_string(migration_file_path).map(|content| (migration_name, content))
})
.collect::<Result<Vec<_>, _>>()?;
database::run_in_transaction(&mut client, |client| {
migrations_with_content
.iter()
.try_for_each(|(migration_name, content)| {
if all_migrations.contains_name(migration_name) {
println!("downgrade {}...", migration_name);
client.run_downgrade_migration(migration_name, content)
} else {
Ok(())
}
})
.map_err(From::from)
})?;
Ok(())
}
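The number of migrations to roll back above is clamped to what has actually been applied: either everything (`--all`) or at most the requested count. That clamping logic, as a stand-alone sketch (hypothetical `rollback_count` helper, not part of the crate):

```rust
use std::cmp;

// Mirror of the clamping logic: roll back all applied migrations,
// or at most `requested`, never more than are applied.
fn rollback_count(all_flag: bool, requested: usize, applied: usize) -> usize {
    if all_flag {
        applied
    } else {
        cmp::min(requested, applied)
    }
}

fn main() {
    assert_eq!(rollback_count(true, 1, 5), 5); // --all ignores the count
    assert_eq!(rollback_count(false, 3, 5), 3); // partial rollback
    assert_eq!(rollback_count(false, 10, 5), 5); // clamped to applied
}
```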


@ -1,18 +1,19 @@
use crate::app::App;
use crate::config::{Config, MIGRA_TOML_FILENAME};
use crate::StdResult;
use std::path::PathBuf;
pub(crate) fn initialize_migra_manifest(config_path: Option<PathBuf>) -> StdResult<()> {
let config_path = config_path
.map(|mut config_path| {
pub(crate) fn initialize_migra_manifest(app: &App) -> migra::StdResult<()> {
let config_path = app.config_path().cloned().map_or_else(
|| PathBuf::from(MIGRA_TOML_FILENAME),
|mut config_path| {
let ext = config_path.extension();
if config_path.is_dir() || ext.is_none() {
config_path.push(MIGRA_TOML_FILENAME);
}
config_path
})
.unwrap_or_else(|| PathBuf::from(MIGRA_TOML_FILENAME));
},
);
if config_path.exists() {
println!("{} already exists", config_path.to_str().unwrap());


@ -0,0 +1,66 @@
use crate::app::App;
use crate::database;
use crate::error::Error;
use migra::migration;
const EM_DASH: char = '—';
pub(crate) fn print_migration_lists(app: &App) -> migra::StdResult<()> {
let config = app.config()?;
let applied_migrations = match config.database.connection_string() {
Ok(ref database_connection_string) => {
let mut client = database::create_client(
&config.database.client(),
database_connection_string,
&config.migrations.table_name(),
)?;
let applied_migrations = client.get_applied_migrations().unwrap_or_else(|err| {
dbg!(err);
migration::List::new()
});
show_applied_migrations(&applied_migrations);
applied_migrations
}
Err(e) if e == Error::MissedEnvVar(String::new()) => {
eprintln!("WARNING: {}", e);
eprintln!("WARNING: No connection to database");
migration::List::new()
}
Err(e) => panic!("{}", e),
};
println!();
let all_migrations = migra::fs::get_all_migrations(&config.migration_dir_path())?;
let pending_migrations = all_migrations.exclude(&applied_migrations);
show_pending_migrations(&pending_migrations);
Ok(())
}
fn show_applied_migrations(applied_migrations: &migration::List) {
println!("Applied migrations:");
if applied_migrations.is_empty() {
println!("{}", EM_DASH);
} else {
applied_migrations
.iter()
.rev()
.for_each(|migration| println!("{}", migration.name()));
}
}
fn show_pending_migrations(pending_migrations: &migration::List) {
println!("Pending migrations:");
if pending_migrations.is_empty() {
println!("{}", EM_DASH);
} else {
pending_migrations.iter().for_each(|migration| {
println!("{}", migration.name());
});
}
}


@ -1,11 +1,12 @@
use crate::app::App;
use crate::opts::MakeCommandOpt;
use crate::Config;
use crate::StdResult;
use chrono::Local;
use std::fs;
pub(crate) fn make_migration(config: Config, opts: MakeCommandOpt) -> StdResult<()> {
let now = Local::now().format("%y%m%d%H%M%S");
pub(crate) fn make_migration(app: &App, opts: &MakeCommandOpt) -> migra::StdResult<()> {
let config = app.config()?;
let date_format = config.migrations.date_format();
let formatted_current_timestamp = Local::now().format(&date_format);
let migration_name: String = opts
.migration_name
@ -17,9 +18,10 @@ pub(crate) fn make_migration(config: Config, opts: MakeCommandOpt) -> StdResult<
})
.collect();
let migration_dir_path = config
.migration_dir_path()
.join(format!("{}_{}", now, migration_name));
let migration_dir_path = config.migration_dir_path().join(format!(
"{}_{}",
formatted_current_timestamp, migration_name
));
if !migration_dir_path.exists() {
fs::create_dir_all(&migration_dir_path)?;
}


@ -0,0 +1,66 @@
use crate::app::App;
use crate::database;
use crate::opts::UpgradeCommandOpt;
use migra::migration;
pub(crate) fn upgrade_pending_migrations(
app: &App,
opts: &UpgradeCommandOpt,
) -> migra::StdResult<()> {
let config = app.config()?;
let mut client = database::create_client_from_config(&config)?;
client.create_migrations_table()?;
let migrations_dir_path = config.migration_dir_path();
let applied_migration_names = client.get_applied_migrations()?;
let all_migrations = migra::fs::get_all_migrations(&migrations_dir_path)?;
let pending_migrations = all_migrations.exclude(&applied_migration_names);
if pending_migrations.is_empty() {
println!("Up to date");
return Ok(());
}
let migrations: migration::List = if let Some(migration_name) = opts.migration_name.clone() {
let target_migration = (*pending_migrations)
.clone()
.into_iter()
.find(|m| m.name() == &migration_name);
if let Some(migration) = target_migration {
vec![migration].into()
} else {
eprintln!(r#"Cannot find a migration named "{}""#, migration_name);
return Ok(());
}
} else {
let upgrade_migrations_number = opts
.migrations_number
.unwrap_or_else(|| pending_migrations.len());
pending_migrations[..upgrade_migrations_number]
.to_vec()
.into()
};
let migrations_with_content = migrations
.iter()
.map(|migration| {
let migration_name = migration.name();
let migration_file_path = migrations_dir_path.join(migration_name).join("up.sql");
std::fs::read_to_string(migration_file_path).map(|content| (migration_name, content))
})
.collect::<Result<Vec<_>, _>>()?;
database::run_in_transaction(&mut client, |client| {
migrations_with_content
.iter()
.try_for_each(|(migration_name, content)| {
println!("upgrade {}...", migration_name);
client.run_upgrade_migration(migration_name, content)
})
.map_err(From::from)
})?;
Ok(())
}
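When no explicit migration name is given, `up` takes the first `-n` pending migrations in order (or all of them). That selection can be sketched over plain strings (illustrative helper, not migra's `migration::List` API):

```rust
/// Given all migration names in order and the applied ones, return the
/// next `n` pending migrations — the selection performed by `migra up -n`
/// (sketch over plain strings rather than migra's migration::List).
fn next_pending(all: &[&str], applied: &[&str], n: usize) -> Vec<String> {
    all.iter()
        .copied()
        // pending = all migrations that were not applied yet
        .filter(|name| !applied.contains(name))
        // take(n) naturally caps at the number of pending migrations
        .take(n)
        .map(str::to_string)
        .collect()
}
```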

migra_cli/src/config.rs

@ -0,0 +1,290 @@
use crate::error::{Error, MigraResult};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use std::{env, fs};
//===========================================================================//
// Internal Config Utils / Macros //
//===========================================================================//
fn search_for_directory_containing_file(path: &Path, file_name: &str) -> MigraResult<PathBuf> {
let file_path = path.join(file_name);
if file_path.is_file() {
Ok(path.to_owned())
} else {
path.parent()
.ok_or(Error::RootNotFound)
.and_then(|p| search_for_directory_containing_file(p, file_name))
}
}
fn recursive_find_project_root() -> MigraResult<PathBuf> {
let current_dir = std::env::current_dir()?;
search_for_directory_containing_file(&current_dir, MIGRA_TOML_FILENAME)
}
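The manifest lookup walks from the current directory toward the filesystem root until it finds a directory containing `Migra.toml`. A std-only sketch of the same walk, iterative instead of recursive (hypothetical function name, `Option` instead of the crate's error type):

```rust
use std::path::{Path, PathBuf};

/// Walk from `start` up through its ancestors and return the first
/// directory that directly contains `file_name` (sketch of the lookup
/// done by search_for_directory_containing_file).
fn find_dir_containing(start: &Path, file_name: &str) -> Option<PathBuf> {
    let mut dir = Some(start);
    while let Some(d) = dir {
        if d.join(file_name).is_file() {
            return Some(d.to_path_buf());
        }
        // move one level up; None at the filesystem root ends the loop
        dir = d.parent();
    }
    None
}
```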
#[cfg(any(
not(feature = "postgres"),
not(feature = "mysql"),
not(feature = "sqlite")
))]
macro_rules! please_install_with {
(feature $database_name:expr) => {
panic!(
r#"You cannot use migra for "{database_name}".
You need to reinstall the crate with the "{database_name}" feature enabled:
cargo install migra-cli --features {database_name}"#,
database_name = $database_name
);
};
}
//===========================================================================//
// Database config //
//===========================================================================//
fn is_sqlite_database_file(filename: &str) -> bool {
filename
.rsplit('.')
.next()
.map(|ext| ext.eq_ignore_ascii_case("db"))
.unwrap_or_default()
}
fn default_database_connection_env() -> String {
String::from("$DATABASE_URL")
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum SupportedDatabaseClient {
#[cfg(feature = "postgres")]
Postgres,
#[cfg(feature = "mysql")]
Mysql,
#[cfg(feature = "sqlite")]
Sqlite,
}
impl Default for SupportedDatabaseClient {
fn default() -> Self {
cfg_if! {
if #[cfg(feature = "postgres")] {
SupportedDatabaseClient::Postgres
} else if #[cfg(feature = "mysql")] {
SupportedDatabaseClient::Mysql
} else if #[cfg(feature = "sqlite")] {
SupportedDatabaseClient::Sqlite
}
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct DatabaseConfig {
pub client: Option<SupportedDatabaseClient>,
#[serde(default = "default_database_connection_env")]
pub connection: String,
}
impl Default for DatabaseConfig {
fn default() -> Self {
DatabaseConfig {
connection: default_database_connection_env(),
client: None,
}
}
}
impl DatabaseConfig {
pub fn client(&self) -> SupportedDatabaseClient {
self.client.clone().unwrap_or_else(|| {
self.connection_string()
.ok()
.and_then(|connection_string| {
if connection_string.starts_with("postgres://") {
cfg_if! {
if #[cfg(feature = "postgres")] {
Some(SupportedDatabaseClient::Postgres)
} else {
please_install_with!(feature "postgres")
}
}
} else if connection_string.starts_with("mysql://") {
cfg_if! {
if #[cfg(feature = "mysql")] {
Some(SupportedDatabaseClient::Mysql)
} else {
please_install_with!(feature "mysql")
}
}
} else if is_sqlite_database_file(&connection_string) {
cfg_if! {
if #[cfg(feature = "sqlite")] {
Some(SupportedDatabaseClient::Sqlite)
} else {
please_install_with!(feature "sqlite")
}
}
} else {
None
}
})
.unwrap_or_default()
})
}
pub fn connection_string(&self) -> MigraResult<String> {
self.connection.strip_prefix('$').map_or_else(
|| Ok(self.connection.clone()),
|connection_env| {
env::var(connection_env)
.map_err(|_| Error::MissedEnvVar(connection_env.to_string()))
},
)
}
}
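Client detection falls back to the connection string's shape, and the `connection` field supports a `$ENV_VAR` indirection. Both rules can be sketched without the crate's types (the enum and helpers below are illustrative, simplified to `Option` instead of `MigraResult`):

```rust
use std::env;

#[derive(Debug, PartialEq)]
enum ClientKind {
    Postgres,
    Mysql,
    Sqlite,
}

/// "$NAME" reads the NAME environment variable; anything else is taken
/// as the connection string itself (mirrors DatabaseConfig::connection_string).
fn expand_connection(raw: &str) -> Option<String> {
    match raw.strip_prefix('$') {
        Some(var_name) => env::var(var_name).ok(),
        None => Some(raw.to_string()),
    }
}

/// Pick a client from the connection string's shape, as in
/// DatabaseConfig::client: URL scheme first, then the ".db"
/// file-extension heuristic for SQLite.
fn detect_client(conn: &str) -> Option<ClientKind> {
    if conn.starts_with("postgres://") {
        Some(ClientKind::Postgres)
    } else if conn.starts_with("mysql://") {
        Some(ClientKind::Mysql)
    } else if conn
        .rsplit('.')
        .next()
        .map_or(false, |ext| ext.eq_ignore_ascii_case("db"))
    {
        Some(ClientKind::Sqlite)
    } else {
        None
    }
}
```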
//===========================================================================//
// Migrations config //
//===========================================================================//
fn default_migrations_directory() -> String {
String::from("migrations")
}
fn default_migrations_table_name() -> String {
String::from("migrations")
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct MigrationsConfig {
#[serde(rename = "directory", default = "default_migrations_directory")]
directory: String,
#[serde(default = "default_migrations_table_name")]
table_name: String,
date_format: Option<String>,
}
impl Default for MigrationsConfig {
fn default() -> Self {
MigrationsConfig {
directory: default_migrations_directory(),
table_name: default_migrations_table_name(),
date_format: None,
}
}
}
impl MigrationsConfig {
pub fn directory(&self) -> String {
self.directory.strip_prefix('$').map_or_else(
|| self.directory.clone(),
|directory_env| {
env::var(directory_env).unwrap_or_else(|_| {
println!(
"WARN: Cannot read {} variable; using {} directory by default",
directory_env,
default_migrations_directory()
);
default_migrations_directory()
})
},
)
}
pub fn table_name(&self) -> String {
self.table_name.strip_prefix('$').map_or_else(
|| self.table_name.clone(),
|table_name_env| {
env::var(table_name_env).unwrap_or_else(|_| {
println!(
"WARN: Cannot read {} variable; using {} table name by default",
table_name_env,
default_migrations_table_name()
);
default_migrations_table_name()
})
},
)
}
pub fn date_format(&self) -> String {
self.date_format
.clone()
.unwrap_or_else(|| String::from("%y%m%d%H%M%S"))
}
}
//===========================================================================//
// Main config //
//===========================================================================//
pub(crate) const MIGRA_TOML_FILENAME: &str = "Migra.toml";
#[derive(Debug, Serialize, Deserialize)]
pub struct Config {
#[serde(skip)]
manifest_root: PathBuf,
root: PathBuf,
#[serde(default)]
pub(crate) database: DatabaseConfig,
#[serde(default)]
pub(crate) migrations: MigrationsConfig,
}
impl Default for Config {
fn default() -> Config {
Config {
manifest_root: PathBuf::default(),
root: PathBuf::from("database"),
database: DatabaseConfig::default(),
migrations: MigrationsConfig::default(),
}
}
}
impl Config {
pub fn read(config_path: Option<&PathBuf>) -> MigraResult<Config> {
let config_path = match config_path {
Some(config_path) if config_path.is_dir() => {
Some(config_path.join(MIGRA_TOML_FILENAME))
}
Some(config_path) => Some(config_path.clone()),
None => recursive_find_project_root()
.map(|path| path.join(MIGRA_TOML_FILENAME))
.ok(),
};
match config_path {
None => Ok(Config::default()),
Some(config_path) => {
let content = fs::read_to_string(&config_path)?;
let mut config: Config = toml::from_str(&content).expect("Cannot parse Migra.toml");
config.manifest_root = config_path
.parent()
.unwrap_or_else(|| Path::new(""))
.to_path_buf();
Ok(config)
}
}
}
pub fn directory_path(&self) -> PathBuf {
self.manifest_root.join(&self.root)
}
pub fn migration_dir_path(&self) -> PathBuf {
self.directory_path().join(self.migrations.directory())
}
}

migra_cli/src/database.rs

@ -0,0 +1,55 @@
use crate::config::SupportedDatabaseClient;
use crate::Config;
#[cfg(feature = "mysql")]
use migra::clients::MysqlClient;
#[cfg(feature = "postgres")]
use migra::clients::PostgresClient;
#[cfg(feature = "sqlite")]
use migra::clients::SqliteClient;
use migra::clients::{AnyClient, OpenDatabaseConnection};
pub fn create_client(
client_kind: &SupportedDatabaseClient,
connection_string: &str,
migrations_table_name: &str,
) -> migra::Result<AnyClient> {
let client: AnyClient = match client_kind {
#[cfg(feature = "postgres")]
SupportedDatabaseClient::Postgres => Box::new(PostgresClient::manual(
connection_string,
migrations_table_name,
)?),
#[cfg(feature = "mysql")]
SupportedDatabaseClient::Mysql => Box::new(MysqlClient::manual(
connection_string,
migrations_table_name,
)?),
#[cfg(feature = "sqlite")]
SupportedDatabaseClient::Sqlite => Box::new(SqliteClient::manual(
connection_string,
migrations_table_name,
)?),
};
Ok(client)
}
pub fn create_client_from_config(config: &Config) -> migra::StdResult<AnyClient> {
create_client(
&config.database.client(),
&config.database.connection_string()?,
&config.migrations.table_name(),
)
.map_err(From::from)
}
pub fn run_in_transaction<TrxFnMut>(client: &mut AnyClient, trx_fn: TrxFnMut) -> migra::Result<()>
where
TrxFnMut: FnOnce(&mut AnyClient) -> migra::Result<()>,
{
client
.begin_transaction()
.and_then(|_| trx_fn(client))
.and_then(|res| client.commit_transaction().and(Ok(res)))
.or_else(|err| client.rollback_transaction().and(Err(err)))
}
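`run_in_transaction` commits only when the closure succeeds and rolls back otherwise, preserving the closure's error. That control flow can be checked against a stub client (the `StubClient` type and `run_in_tx` name below are hypothetical stand-ins, not the migra client API):

```rust
/// Minimal stand-in for a transactional client that records which
/// transaction methods were called (illustrative only).
struct StubClient {
    log: Vec<&'static str>,
}

impl StubClient {
    fn begin(&mut self) -> Result<(), String> {
        self.log.push("begin");
        Ok(())
    }
    fn commit(&mut self) -> Result<(), String> {
        self.log.push("commit");
        Ok(())
    }
    fn rollback(&mut self) -> Result<(), String> {
        self.log.push("rollback");
        Ok(())
    }
}

/// Same shape as run_in_transaction: begin, run the closure, commit on
/// Ok, roll back on Err while keeping the original error.
fn run_in_tx<F>(client: &mut StubClient, trx_fn: F) -> Result<(), String>
where
    F: FnOnce(&mut StubClient) -> Result<(), String>,
{
    client.begin()?;
    match trx_fn(client) {
        Ok(()) => client.commit(),
        Err(err) => client.rollback().and(Err(err)),
    }
}
```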


@ -4,7 +4,6 @@ use std::io;
use std::mem;
use std::result;
pub type StdResult<T> = result::Result<T, Box<dyn std::error::Error>>;
pub type MigraResult<T> = result::Result<T, Error>;
#[derive(Debug)]

migra_cli/src/main.rs

@ -0,0 +1,32 @@
#![deny(clippy::all, clippy::pedantic)]
#![forbid(unsafe_code)]
#[macro_use]
extern crate cfg_if;
#[cfg(not(any(feature = "postgres", feature = "mysql", feature = "sqlite")))]
compile_error!(
r#"Either features "postgres", "mysql" or "sqlite" must be enabled for "migra-cli" crate"#
);
mod app;
mod commands;
mod config;
mod database;
mod error;
pub use error::Error;
mod opts;
use app::App;
use config::Config;
use opts::{AppOpt, StructOpt};
fn main() {
#[cfg(feature = "dotenv")]
dotenv::dotenv().ok();
if let Err(err) = App::new(AppOpt::from_args()).run_command() {
panic!("Error: {}", err);
}
}


@ -2,17 +2,17 @@ use std::path::PathBuf;
use structopt::clap;
pub use structopt::StructOpt;
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
#[structopt(bin_name = "migra", name = "Migra")]
pub(crate) struct AppOpt {
#[structopt(short, long)]
pub config: Option<PathBuf>,
#[structopt(name = "config", short, long)]
pub config_path: Option<PathBuf>,
#[structopt(subcommand)]
pub command: Command,
}
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
pub(crate) enum Command {
Init,
@ -32,20 +32,20 @@ pub(crate) enum Command {
Completions(CompletionsShell),
}
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
pub(crate) struct ApplyCommandOpt {
#[structopt(parse(from_str))]
pub file_name: String,
#[structopt(parse(from_os_str), required = true)]
pub file_paths: Vec<PathBuf>,
}
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
pub(crate) struct MakeCommandOpt {
/// Name of the migration to create in the specified directory.
#[structopt(parse(from_str))]
pub migration_name: String,
}
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
pub(crate) struct UpgradeCommandOpt {
/// Name of the existing migration that will update the schema
/// in the database.
@ -57,7 +57,7 @@ pub(crate) struct UpgradeCommandOpt {
pub migrations_number: Option<usize>,
}
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
pub(crate) struct DowngradeCommandOpt {
/// How many applied migrations to roll back.
#[structopt(long = "number", short = "n", default_value = "1")]
@ -68,7 +68,7 @@ pub(crate) struct DowngradeCommandOpt {
pub all_migrations: bool,
}
#[derive(Debug, StructOpt)]
#[derive(Debug, StructOpt, Clone)]
pub(crate) enum CompletionsShell {
Bash,
Fish,

migra_cli/tests/commands.rs

@ -0,0 +1,687 @@
pub use assert_cmd::prelude::*;
pub use cfg_if::cfg_if;
use client_mysql::prelude::*;
pub use predicates::str::contains;
pub use std::process::Command;
pub type TestResult = std::result::Result<(), Box<dyn std::error::Error>>;
pub const ROOT_PATH: &str = concat!(env!("CARGO_MANIFEST_DIR"), "/tests/data/");
pub fn path_to_file<D: std::fmt::Display>(file_name: D) -> String {
format!("{}{}", ROOT_PATH, file_name)
}
pub fn database_manifest_path<D: std::fmt::Display>(database_name: D) -> String {
path_to_file(format!("Migra_{}.toml", database_name))
}
pub const DATABASE_URL_DEFAULT_ENV_NAME: &str = "DATABASE_URL";
pub const POSTGRES_URL: &str = "postgres://postgres:postgres@localhost:6000/migra_tests";
pub const MYSQL_URL: &str = "mysql://mysql:mysql@localhost:6001/migra_tests";
pub const SQLITE_URL: &str = "local.db";
pub fn remove_sqlite_db() -> TestResult {
std::fs::remove_file(SQLITE_URL).or(Ok(()))
}
pub struct Env {
key: &'static str,
}
impl Env {
pub fn new(key: &'static str, value: &'static str) -> Self {
std::env::set_var(key, value);
Env { key }
}
}
impl Drop for Env {
fn drop(&mut self) {
std::env::remove_var(self.key);
}
}
mod init {
use super::*;
use std::fs;
#[test]
fn init_manifest_with_default_config() -> TestResult {
let manifest_path = "Migra.toml";
fs::remove_file(&manifest_path).ok();
Command::cargo_bin("migra")?
.arg("init")
.assert()
.success()
.stdout(contains(format!("Created {}", &manifest_path)));
let content = fs::read_to_string(&manifest_path)?;
assert_eq!(
content,
r#"root = "database"
[database]
connection = "$DATABASE_URL"
[migrations]
directory = "migrations"
table_name = "migrations"
"#
);
fs::remove_file(&manifest_path)?;
Ok(())
}
#[test]
fn init_manifest_in_custom_path() -> TestResult {
let manifest_path = path_to_file("Migra.toml");
fs::remove_file(&manifest_path).ok();
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("init")
.assert()
.success()
.stdout(contains(format!("Created {}", manifest_path.as_str())));
let content = fs::read_to_string(&manifest_path)?;
assert_eq!(
content,
r#"root = "database"
[database]
connection = "$DATABASE_URL"
[migrations]
directory = "migrations"
table_name = "migrations"
"#
);
fs::remove_file(&manifest_path)?;
Ok(())
}
}
mod list {
use super::*;
#[test]
fn empty_migration_list() -> TestResult {
Command::cargo_bin("migra")?
.arg("ls")
.assert()
.success()
.stderr(contains(
r#"WARNING: Missed "DATABASE_URL" environment variable
WARNING: No connection to database"#,
))
.stdout(contains(
r#"
Pending migrations:
"#,
));
Ok(())
}
#[test]
fn empty_migration_list_with_db() -> TestResult {
fn inner(connection_string: &'static str) -> TestResult {
let env = Env::new(DATABASE_URL_DEFAULT_ENV_NAME, connection_string);
Command::cargo_bin("migra")?
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
"#,
));
drop(env);
Ok(())
}
#[cfg(feature = "postgres")]
inner(POSTGRES_URL)?;
#[cfg(feature = "mysql")]
inner(MYSQL_URL)?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| inner(SQLITE_URL))?;
Ok(())
}
#[test]
#[cfg(feature = "postgres")]
fn empty_migration_list_with_url_in_manifest() -> TestResult {
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_url_empty.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
"#,
));
Ok(())
}
#[test]
#[cfg(feature = "postgres")]
fn empty_migration_list_with_env_in_manifest() -> TestResult {
let env = Env::new("DB_URL", POSTGRES_URL);
Command::cargo_bin("migra")?
.arg("-c")
.arg(path_to_file("Migra_env_empty.toml"))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
"#,
));
drop(env);
Ok(())
}
#[test]
fn empty_applied_migrations() -> TestResult {
fn inner(database_name: &'static str) -> TestResult {
Command::cargo_bin("migra")?
.arg("-c")
.arg(database_manifest_path(database_name))
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
Pending migrations:
210218232851_create_articles
210218233414_create_persons
"#,
));
Ok(())
}
#[cfg(feature = "postgres")]
inner("postgres")?;
#[cfg(feature = "mysql")]
inner("mysql")?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| inner("sqlite"))?;
Ok(())
}
#[test]
fn applied_all_migrations() -> TestResult {
fn inner(database_name: &'static str) -> TestResult {
let manifest_path = database_manifest_path(database_name);
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("up")
.assert()
.success();
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
210218232851_create_articles
210218233414_create_persons
Pending migrations:
"#,
));
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("down")
.arg("--all")
.assert()
.success();
Ok(())
}
#[cfg(feature = "postgres")]
inner("postgres")?;
#[cfg(feature = "mysql")]
inner("mysql")?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| inner("sqlite"))?;
Ok(())
}
#[test]
fn applied_one_migrations() -> TestResult {
fn inner(database_name: &'static str) -> TestResult {
let manifest_path = database_manifest_path(database_name);
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("up")
.arg("-n")
.arg("1")
.assert()
.success();
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("ls")
.assert()
.success()
.stdout(contains(
r#"Applied migrations:
210218232851_create_articles
Pending migrations:
210218233414_create_persons
"#,
));
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("down")
.assert()
.success();
Ok(())
}
#[cfg(feature = "postgres")]
inner("postgres")?;
#[cfg(feature = "mysql")]
inner("mysql")?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| inner("sqlite"))?;
Ok(())
}
}
mod make {
use super::*;
use std::fs;
#[test]
fn make_migration_directory() -> TestResult {
fn inner(database_name: &'static str) -> TestResult {
Command::cargo_bin("migra")?
.arg("-c")
.arg(database_manifest_path(database_name))
.arg("make")
.arg("test")
.assert()
.success()
.stdout(contains("Structure for migration has been created in"));
let entries = fs::read_dir(path_to_file(format!("{}/migrations", database_name)))?
.map(|entry| entry.map(|e| e.path()))
.collect::<Result<Vec<_>, std::io::Error>>()?;
let dir_paths = entries
.iter()
.filter_map(|path| {
path.to_str().and_then(|path| {
if path.ends_with("_test") {
Some(path)
} else {
None
}
})
})
.collect::<Vec<_>>();
for dir_path in dir_paths.iter() {
let upgrade_content = fs::read_to_string(format!("{}/up.sql", dir_path))?;
let downgrade_content = fs::read_to_string(format!("{}/down.sql", dir_path))?;
assert_eq!(upgrade_content, "-- Your SQL goes here\n\n");
assert_eq!(
downgrade_content,
"-- This file should undo anything in `up.sql`\n\n"
);
fs::remove_dir_all(dir_path)?;
}
Ok(())
}
#[cfg(feature = "postgres")]
inner("postgres")?;
#[cfg(feature = "mysql")]
inner("mysql")?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| inner("sqlite"))?;
Ok(())
}
}
mod upgrade {
use super::*;
#[test]
fn applied_all_migrations() -> TestResult {
fn inner<ValidateFn>(database_name: &'static str, validate: ValidateFn) -> TestResult
where
ValidateFn: Fn() -> TestResult,
{
let manifest_path = database_manifest_path(database_name);
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("up")
.assert()
.success();
validate()?;
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("down")
.arg("--all")
.assert()
.success();
Ok(())
}
#[cfg(feature = "postgres")]
inner("postgres", || {
let mut conn = client_postgres::Client::connect(POSTGRES_URL, client_postgres::NoTls)?;
let res = conn.query("SELECT p.id, a.id FROM persons AS p, articles AS a", &[])?;
assert_eq!(
res.into_iter()
.map(|row| (row.get(0), row.get(1)))
.collect::<Vec<(i32, i32)>>(),
Vec::new()
);
Ok(())
})?;
#[cfg(feature = "mysql")]
inner("mysql", || {
let pool = client_mysql::Pool::new(MYSQL_URL)?;
let mut conn = pool.get_conn()?;
let res = conn.query_drop("SELECT p.id, a.id FROM persons AS p, articles AS a")?;
assert_eq!(res, ());
Ok(())
})?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| {
inner("sqlite", || {
let conn = client_rusqlite::Connection::open(SQLITE_URL)?;
let res =
conn.execute_batch("SELECT p.id, a.id FROM persons AS p, articles AS a")?;
assert_eq!(res, ());
Ok(())
})
})?;
Ok(())
}
#[test]
fn cannot_applied_invalid_migrations_in_single_transaction() -> TestResult {
fn inner<ValidateFn>(database_name: &'static str, validate: ValidateFn) -> TestResult
where
ValidateFn: Fn() -> TestResult,
{
let manifest_path = database_manifest_path(database_name);
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("up")
.arg("--single-transaction")
.assert()
.failure();
validate()?;
Ok(())
}
#[cfg(feature = "postgres")]
inner("postgres_invalid", || {
let mut conn = client_postgres::Client::connect(POSTGRES_URL, client_postgres::NoTls)?;
let articles_res = conn.query("SELECT a.id FROM articles AS a", &[]);
let persons_res = conn.query("SELECT p.id FROM persons AS p", &[]);
assert!(articles_res.is_err());
assert!(persons_res.is_err());
Ok(())
})?;
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| {
inner("sqlite_invalid", || {
let conn = client_rusqlite::Connection::open(SQLITE_URL)?;
let articles_res = conn.execute_batch("SELECT a.id FROM articles AS a");
let persons_res = conn.execute_batch("SELECT p.id FROM persons AS p");
assert!(articles_res.is_err());
assert!(persons_res.is_err());
Ok(())
})
})?;
// MySQL doesn't support transactional DDL, so there is nothing to check here 🤷
Ok(())
}
}
mod apply {
use super::*;
#[test]
fn apply_files() -> TestResult {
fn inner<ValidateFn>(
database_name: &'static str,
file_paths: Vec<&'static str>,
validate: ValidateFn,
) -> TestResult
where
ValidateFn: Fn() -> TestResult,
{
let manifest_path = database_manifest_path(database_name);
Command::cargo_bin("migra")?
.arg("-c")
.arg(&manifest_path)
.arg("apply")
.args(file_paths)
.assert()
.success();
validate()?;
Ok(())
}
cfg_if! {
if #[cfg(feature = "postgres")] {
inner(
"postgres",
vec![
"migrations/210218232851_create_articles/up",
"migrations/210218233414_create_persons/up",
],
|| {
let mut conn = client_postgres::Client::connect(POSTGRES_URL, client_postgres::NoTls)?;
let res = conn.query("SELECT p.id, a.id FROM persons AS p, articles AS a", &[])?;
assert_eq!(
res.into_iter()
.map(|row| (row.get(0), row.get(1)))
.collect::<Vec<(i32, i32)>>(),
Vec::new()
);
Ok(())
},
)?;
inner(
"postgres",
vec![
"migrations/210218233414_create_persons/down",
"migrations/210218232851_create_articles/down",
],
|| {
let mut conn = client_postgres::Client::connect(POSTGRES_URL, client_postgres::NoTls)?;
let res = conn.query("SELECT p.id, a.id FROM persons AS p, articles AS a", &[]);
assert!(res.is_err());
Ok(())
},
)?;
}
}
cfg_if! {
if #[cfg(feature = "mysql")] {
inner(
"mysql",
vec![
"migrations/210218232851_create_articles/up",
"migrations/210218233414_create_persons/up",
],
|| {
let pool = client_mysql::Pool::new(MYSQL_URL)?;
let mut conn = pool.get_conn()?;
let res = conn.query_drop("SELECT p.id, a.id FROM persons AS p, articles AS a")?;
assert_eq!(res, ());
Ok(())
},
)?;
inner(
"mysql",
vec![
"migrations/210218233414_create_persons/down",
"migrations/210218232851_create_articles/down",
],
|| {
let pool = client_mysql::Pool::new(MYSQL_URL)?;
let mut conn = pool.get_conn()?;
let res = conn.query_drop("SELECT p.id, a.id FROM persons AS p, articles AS a");
assert!(res.is_err());
Ok(())
}
)?;
}
}
#[cfg(feature = "sqlite")]
remove_sqlite_db().and_then(|_| {
inner(
"sqlite",
vec![
"migrations/210218232851_create_articles/up",
"migrations/210218233414_create_persons/up",
],
|| {
let conn = client_rusqlite::Connection::open(SQLITE_URL)?;
let res =
conn.execute_batch("SELECT p.id, a.id FROM persons AS p, articles AS a")?;
assert_eq!(res, ());
Ok(())
},
)?;
inner(
"sqlite",
vec![
"migrations/210218233414_create_persons/down",
"migrations/210218232851_create_articles/down",
],
|| {
let conn = client_rusqlite::Connection::open(SQLITE_URL)?;
let res =
conn.execute_batch("SELECT p.id, a.id FROM persons AS p, articles AS a");
assert!(res.is_err());
Ok(())
},
)
})?;
Ok(())
}
}


@ -1,4 +1,4 @@
root = "./"
root = "./postgres"
[database]
connection = "$DATABASE_URL"


@ -0,0 +1,4 @@
root = "./mysql"
[database]
connection = "mysql://mysql:mysql@localhost:6001/migra_tests"


@ -1,4 +1,4 @@
root = "./"
root = "./postgres"
[database]
connection = "postgres://postgres:postgres@localhost:6000/migra_tests"


@ -0,0 +1,4 @@
root = "./postgres_invalid"
[database]
connection = "postgres://postgres:postgres@localhost:6000/migra_tests"


@ -0,0 +1,4 @@
root = "./sqlite"
[database]
connection = "local.db"


@ -0,0 +1,4 @@
root = "./sqlite_invalid"
[database]
connection = "local.db"


@ -0,0 +1,8 @@
-- Your SQL goes here
CREATE TABLE articles (
id int AUTO_INCREMENT PRIMARY KEY,
title text NOT NULL CHECK (length(title) > 0),
content text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);


@ -0,0 +1,12 @@
-- Your SQL goes here
CREATE TABLE persons (
id int AUTO_INCREMENT PRIMARY KEY,
email varchar(256) NOT NULL UNIQUE,
display_name text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);
ALTER TABLE articles
ADD COLUMN author_person_id int NULL
REFERENCES persons (id) ON UPDATE CASCADE ON DELETE CASCADE;


@ -0,0 +1,3 @@
-- This file should undo anything in `up.sql`
DROP TABLE articles;


@ -0,0 +1,6 @@
-- This file should undo anything in `up.sql`
ALTER TABLE articles
DROP COLUMN author_person_id;
DROP TABLE persons;


@ -0,0 +1,3 @@
-- This file should undo anything in `up.sql`
DROP TABLE articles;


@ -0,0 +1,8 @@
-- Your SQL goes here
CREATE TABLE articles (
id serial PRIMARY KEY,
title text NOT NULL CHECK (length(title) > 0),
content text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);


@ -0,0 +1,6 @@
-- This file should undo anything in `up.sql`
ALTER TABLE articles
DROP COLUMN author_person_id;
DROP TABLE persons;


@ -0,0 +1,14 @@
-- Your SQL goes here
CREATE TABLE persons (
id SERIAL PRIMARY KEY,
email text NOT NULL UNIQUE,
display_name text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);
/* This table doesn't exist
*/
ALTER TABLE recipes
ADD COLUMN author_person_id int NULL
REFERENCES persons (id) ON UPDATE CASCADE ON DELETE CASCADE;


@ -0,0 +1,3 @@
-- This file should undo anything in `up.sql`
DROP TABLE articles;


@ -0,0 +1,8 @@
-- Your SQL goes here
CREATE TABLE articles (
id int AUTO_INCREMENT PRIMARY KEY,
title text NOT NULL CHECK (length(title) > 0),
content text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);


@ -0,0 +1,16 @@
-- This file should undo anything in `up.sql`
CREATE TABLE tmp_articles (
id int AUTO_INCREMENT PRIMARY KEY,
title text NOT NULL CHECK (length(title) > 0),
content text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);
INSERT INTO tmp_articles (id, title, content, created_at)
SELECT id, title, content, created_at FROM articles;
DROP TABLE articles;
ALTER TABLE tmp_articles RENAME TO articles;
DROP TABLE persons;


@ -0,0 +1,12 @@
-- Your SQL goes here
CREATE TABLE persons (
id int AUTO_INCREMENT PRIMARY KEY,
email varchar(256) NOT NULL UNIQUE,
display_name text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);
ALTER TABLE articles
ADD COLUMN author_person_id int NULL
REFERENCES persons (id) ON UPDATE CASCADE ON DELETE CASCADE;


@ -0,0 +1,3 @@
-- This file should undo anything in `up.sql`
DROP TABLE articles;


@ -0,0 +1,8 @@
-- Your SQL goes here
CREATE TABLE articles (
id int AUTO_INCREMENT PRIMARY KEY,
title text NOT NULL CHECK (length(title) > 0),
content text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);


@ -0,0 +1,16 @@
-- This file should undo anything in `up.sql`
CREATE TABLE tmp_articles (
id int AUTO_INCREMENT PRIMARY KEY,
title text NOT NULL CHECK (length(title) > 0),
content text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);
INSERT INTO tmp_articles (id, title, content, created_at)
SELECT id, title, content, created_at FROM articles;
DROP TABLE articles;
ALTER TABLE tmp_articles RENAME TO articles;
DROP TABLE persons;


@ -0,0 +1,14 @@
-- Your SQL goes here
CREATE TABLE persons (
id int AUTO_INCREMENT PRIMARY KEY,
email varchar(256) NOT NULL UNIQUE,
display_name text NOT NULL,
created_at timestamp NOT NULL DEFAULT current_timestamp
);
/* This table doesn't exist
*/
ALTER TABLE recipes
ADD COLUMN author_person_id int NULL
REFERENCES persons (id) ON UPDATE CASCADE ON DELETE CASCADE;